Showing posts with label Edge and Fog Computing.

Friday, March 22, 2024

Research Challenges for the Advancement of Vehicular Networking

It's been a while since we covered V2X as a topic on this blog. If you are not well versed with CAVs and V2X, we recommend watching our tutorials on the 3G4G page here.

The Networking Channel hosted a seminar on 'Vehicular Networking' last month. Quoting from the webinar preview:

Looking back at the last decade, one can observe enormous progress in the domain of vehicular networking. Many ongoing activities focus on the design of cooperative perception, distributed computing, and novel safety solutions. Many projects have been initiated to validate the theoretic work in field tests and protocols are being standardized. We are now entering an era that might change the game in road traffic management. Many car makers already supply their recent brands with cellular and Wi-Fi modems, also adding C-V2X and ITS-G5 technologies. We now intend to shift the focus from basic networking principles to open challenges in cooperative computing support and even on how to integrate so-called vulnerable road users into the picture. Edge computing is currently becoming one of the core building blocks of cellular networks, including 5G, and it is necessary to study how to integrate ICT components of moving systems. The panellists will discuss from an industrial perspective the main research challenges for the advancement of vehicular networking and the novelties that we can expect to see coming in the short term. Panellists with extensive experience in Internet measurements, networks related to sustainable development goals, and highly-localized earth observation networks will discuss these topics and participate in a Q&A session with the audience.

The presentations were not shared but the video of the panel discussion is as follows:

The following talks were presented:

  • Vehicular Networking? by Onur Altintas, Toyota North America R&D (0:04:55)
  • Collaborative Perception Sharing for Connected Autonomous Vehicles by Fan Bai, General Motors Global R&D (0:15:00)
  • The future of vehicular networking by Frank Hofmann, Robert Bosch GmbH (0:23:25)
  • The future of vehicular networks and path to 6G by Dr.-Ing. Volker Ziegler, Nokia (0:35:15)
  • Panel Discussion with all speakers (0:44:30)


Wednesday, March 22, 2023

An Introduction to Multi-access Edge Computing (MEC) in 5G

We have covered some detailed webinars and presentations on MEC (Multi-access Edge Computing), but people have often asked us to add a basic tutorial on the topic. Wray Castle hosted a webinar on this last year. The abstract says:

MEC extends the NFV concept to deploy the virtualization resources at the network edge rather than in the core/public network. MEC thus provides a significant opportunity to offer hosting for a range of novel applications, including those deployed by third parties, but it introduces new challenges in terms of the capabilities needed, and the need to deploy applications onto the right edge systems for these benefits to be realised. This webinar explores these topics and presents an overview of the ETSI standardised architecture for MEC.

The webinar video is embedded below:


Tuesday, May 31, 2022

Transitioning from Cloud-native to Edge-Native Infrastructure

We have looked at what we mean by cloud-native in an earlier post here. Recently we also looked at edge-native infrastructure here. While the cloud versus edge debate has been going on for a while, in a new presentation (embedded below), Gorkem Yigit, Principal Analyst at Analysys Mason, argues that the new, distributed IT/OT applications will drive the shift from cloud-native to edge-native infrastructure.

The talk by Gorkem on '5G and edge network clouds: industry progress and the shape of the new market' from Layer123 World Congress 2021 is as follows:

A blog post by ADVA has a nice short summary of the image at the top, which was also presented at an earlier webinar. The following is an extract from that blog post:

The diagram compares hyperscale (“cloud-native infrastructure”) on the left with hyper-localized (“edge-native infrastructure”) on the right.

  • Computing: The traditional hyperscale cloud is built on centralized and pooled resources. This approach enables unlimited scalability. In contrast, compute at the edge has limited scalability, and may require additional equipment to grow applications. But the initial cost at the edge is correspondingly low, and grows linearly with demand. That compares favorably to the initial cost for a hyperscale data center, which may be tens of millions of dollars.
  • Location sensitivity and latency: Users of the hyperscale data center assume their workloads can run anywhere, and latency is not a major consideration. In contrast, hyper-localized applications are tied to a particular location. This might be due to new laws and regulations on data sovereignty that require that information doesn’t leave the premises or country. Or it could be due to latency restrictions as with 5G infrastructure. In either case, shipping data to a remote hyperscale data center is not acceptable.
  • Hardware: Modern hyperscale data centers are filled with row after row of server racks – all identical. That ensures good prices from bulk purchases, as well as minimal inventory requirements for replacements. The hyper-localized model is more complicated. Each location must be right-sized, and supply-chain considerations come into play for international deployments. There also may be a menagerie of devices to manage.
  • Connectivity: Efficient use of hyperscale data centers depends on reliable and high-bandwidth connectivity. That is not available for some applications. Or they may be required to operate when connectivity is lost. An interesting example of this case is data processing in space, where connectivity is slow and intermittent.
  • Cloud stack: Hyperscale and hyper-localized deployments can host VMs and containers. In addition, hyper-localized edge clouds can host serverless applications, which are ideal for small workloads.
  • Security: Hyperscale data centers use a traditional perimeter-based security model. Once you are in, you are in. Hyper-localized deployments can provide a zero-trust model. Each site is secured as with a hyperscale model, but each application can also be secured based on specific users and credentials.
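
To make the trade-offs listed above a little more concrete, here is a minimal placement sketch. It is our own illustration, not part of the ADVA extract or the Analysys Mason presentation, and the criteria names and the 20 ms threshold are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float             # end-to-end latency budget
    data_must_stay_local: bool        # data sovereignty / regulatory constraint
    must_survive_backhaul_loss: bool  # keep running if connectivity is lost

def choose_placement(w: Workload) -> str:
    """Toy decision rule mirroring the criteria discussed in the extract above."""
    if w.data_must_stay_local or w.must_survive_backhaul_loss:
        return "hyper-localized edge"
    if w.max_latency_ms < 20:         # illustrative latency threshold
        return "hyper-localized edge"
    return "hyperscale cloud"

print(choose_placement(Workload("ar-rendering", 10, False, False)))
print(choose_placement(Workload("monthly-billing", 500, False, False)))
```

In practice such decisions also weigh cost, hardware availability and operations, but a simple rule like this captures why the two models end up coexisting rather than competing.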

You don’t have to choose upfront

So, which do you pick? Hyperscale or hyper-localized?

The good news is that you can use both as needed, if you make some good design choices.

  • Cloud-native: You should design for cloud-native portability. That means using technologies such as containers and a micro-services architecture.
  • Cloud provider supported edge clouds: Hyperscale cloud providers are now supporting local deployments. These tools enable users to move workloads to different sites based on the criteria discussed above. Examples include IBM Cloud Satellite, Amazon Outposts, Google Anthos, Azure Stack and Azure Arc.
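
As a small illustration of the 'design for cloud-native portability' point, the sketch below is a stateless service that takes all environment-specific settings from environment variables, so the same container image could in principle run unchanged in a hyperscale region or on one of the edge stacks listed above. The service name, port and variables are our own illustrative choices, not taken from the ADVA post or the webinar.

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# All deployment-specific values come from the environment, so the same image
# can be scheduled onto a central cloud region or an edge site unchanged.
SITE_NAME = os.environ.get("SITE_NAME", "unknown-site")
PORT = int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "hello", "served_from": SITE_NAME}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The orchestrator (Kubernetes, Outposts, Anthos, ...) decides where this
    # runs; the code itself carries no assumptions about its location.
    HTTPServer(("", PORT), Handler).serve_forever()
```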

You can also learn more about this topic in the Analysys Mason webinar, “From cloud-native to edge-native computing: defining the cloud platform for new use cases”. The slides can be downloaded from there after registration.


Monday, April 25, 2022

Edge Computing Tutorial from Transforma Insights

Jim Morrish, Founding Partner of Transforma Insights, has kindly made an in-depth Edge Computing tutorial for our channel. The slides and video are embedded below.

In this tutorial Jim covers the following topics:

  • Definitions of Edge Computing.
  • How and why Edge Computing is used.
  • Planning for deployment of Edge Computing.
  • Forecasts for Edge Computing.

We would love to know if this answers your questions on this topic. If not, please feel free to post your questions below.  


Tuesday, November 23, 2021

3GPP Presentations from CEATEC Japan 2021

3GPP and its Japanese Organizational Partners TTC (Telecommunication Technology Committee) and ARIB (Association of Radio Industries and Businesses) hosted a “3GPP Summit” online workshop at CEATEC 2021, back in October. It was co-located with the Japanese Ministry of Internal Affairs and Communications (MIC) and 5G Mobile Communications Promotion Forum (5GMF) 5G day. Here is a summary of the event from 3GPP news:

The “3GPP Summit” featured all three Technical Specification Group (TSG) Chairs and one Japanese leader from each group. After the presentations, they exchanged their views and expectations for 3GPP work – as the industry starts to look at research beyond 5G. The event attracted almost 700 people, keen to understand what is going on in 3GPP.

The first session covered Release 17 and 18 evolution, with each TSG Chair and a domestic leader jointly presenting. Wanshi Chen introduced the latest schedule of each release and potential projects for Release 18, along with the results of the 3GPP Release 18 workshop held in June. Then, Hiroki Takeda presented some key Release 17 features such as RedCap, RAN slicing and duplex evolution.

TSG SA Chair, Georg Mayer introduced the group’s latest activities alongside Satoshi Nagata, covering key Release 17 features such as enhanced support for Non-Public Networks, Industrial IoT and Edge computing.

Next up was the TSG CT Chair, Lionel Morand, presenting the latest activities and roadmap for Core Network evolution from Release 15 to 17. Hiroshi Ishikawa also presented, covering 5G core protocol enhancements and some activities driven by operators.

The second part of the session focused more on activities ‘Beyond 5G’. First, Takaharu Nakamura introduced the latest activities on the topic in Japan. A panel discussion followed, with Satoshi Nagata joining the other 3GPP speakers, to give feedback on 5G developments and future use.

You can download the PPT of the presentations from the 3GPP site here or get the PDF from the 3G4G page here.

Please feel free to add your thoughts as comments below.


Monday, June 21, 2021

3GPP Standards on Edge Computing

A sub-set of 3GPP Market Representation Partners hosted a 2-part webinar series in April 2021 looking at edge computing for industry verticals and on-going standardisation work in 3GPP. The first part write-up is available here. The webinar was attended by a mix of organisations from both verticals and the telecommunication industry, helping to share a common understanding on edge computing. 

The webinar brought together top experts at the 3GPP plenary level, SA2 (Architecture) and SA6 (application enablement and critical communication applications) for a deep-dive into how 5G and related standards can help harmonise and enable technologies like edge computing and artificial intelligence to work together much more efficiently. 

The webinar was co-chaired by Georg Mayer, 3GPP SA Chairman, and Stephanie Parker, Trust-IT and Vice-chair of the 5G-IA Pre-Standardisation WG, with John Favaro, Trust-IT and member of the 5G PPP Automotive Working Group.


The video embedded below is the recording of part 2 of the edge computing webinar, '3GPP Standards on Edge Computing', held on Thursday 22 April 2021 as an educational deep dive to help industry verticals gain a better understanding of an evolving landscape. It gives key insights into 3GPP standardisation work on edge computing with an overview of the main activities taking place within SA (System Aspects and Architecture). Presentations and panel discussions zoom in on the network layer with SA2 (Architecture) and on the application layer for vertical enablement with SA6 (Application Enablement and Critical Communication Applications). The panel discussion with the SA TSG, SA2 and SA6 chairmen sheds light on the role of artificial intelligence from both the network and application perspectives, underscoring the vital importance of industry verticals taking part in the standardisation process to have their specific requirements met in 3GPP as a truly global initiative.

PDF of presentations as follows:

Global5G has a summary with main takeaways and poll findings here. The following is from there:

Main Takeaways

  1. 5G will help technologies like edge computing and artificial intelligence to harmonise and enable them to work together much more efficiently.
  2. 3GPP Release 17 is foundational for edge computing but more will come in future releases given its importance in mobile communications and as we gradually move beyond 5G. The webinar was therefore a timely deep-dive into today's landscape. 
  3. Artificial Intelligence and edge computing can both serve as building blocks but in different ways (a toy offload-decision sketch follows after this list): 
    • Network layer perspectives: AI can further optimise edge computing applications.
    • Application layer perspectives: Edge computing can be a building block for AI, e.g. offloading limited capabilities from the device to the network.
  4. Global initiatives like 3GPP can help reduce regional fragmentation, drive convergence and enable network-compliant rollouts that benefit the ecosystem around the world.
  5. As a global initiative, 3GPP is well placed to build on its strong relationships and collaborations with ETSI MEC and GSMA. 
  6. It is absolutely essential that industry verticals get involved in 3GPP working groups, which is where key activities take place and where their requirements should be channelled. It is also important that verticals understand how their seemingly specific requirements could be relevant to other sectors. Being part of 3GPP is a complex but highly rewarding experience. It does not need to be a life-long commitment.
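
As referenced in takeaway 3, here is a toy offload-decision sketch of what 'offloading limited capabilities from the device to the network' can mean in practice. The function and all of the numbers are our own illustrative assumptions, not from the webinar.

```python
def should_offload(local_ms: float, edge_ms: float, rtt_ms: float,
                   latency_budget_ms: float) -> bool:
    """Offload only if the edge path is faster AND still meets the latency budget."""
    edge_total_ms = rtt_ms + edge_ms
    return edge_total_ms < local_ms and edge_total_ms <= latency_budget_ms

# A constrained device: 120 ms to run a vision model locally, 15 ms on an
# edge GPU, 20 ms round trip to the edge site, 50 ms end-to-end budget.
print(should_offload(local_ms=120, edge_ms=15, rtt_ms=20, latency_budget_ms=50))  # True

# The same workload against a distant central cloud (80 ms RTT) misses the budget.
print(should_offload(local_ms=120, edge_ms=15, rtt_ms=80, latency_budget_ms=50))  # False
```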

Poll Findings - Participant Viewpoints

Do you participate in standardization on edge computing?

Interestingly most respondents do not take part in any standardisation initiatives. Hence the webinar series was an opportunity to highlight the many activities taking place and encourage participants to get involved. Those that do take part mostly contribute to 3GPP and other forums (29%) like ETSI (SDO) and industry associations like 5GAA and 5G-ACIA as some of the early movers on edge computing. Beyond 3GPP, a smaller number of respondents (11%) contribute to ETSI and other forums such as 5GAA and GSMA and the same amount (11%) are involved in other forums.

How important do you think coordination on edge computing standardisation is?

Coordination on edge computing standardisation needs to be prioritised with 65% of respondents saying it's vital and another 33% saying it's quite important. Only 1 respondent said it's not needed. An important output via the 5G-IA Pre-Standardisation WG and supported by panellists and organisers (5G-IA, 5GAA, 5G-ACIA and PSCE) would be a user-friendly guide on edge computing standardisation to help stakeholders navigate the landscape. 

Do you see a need for new areas of standardisation for edge computing?

Findings from this poll are particularly interesting as we have a close split between those that think more standardisation work is needed (47%) and those that don't know (43%), with just 10% saying it's not needed. Webinar organisers have come up with two possible explanations. On the one hand, we may be looking at a fragmented landscape that would benefit from more unification, also from an architecture perspective. On the other hand, organisations looking at the landscape may simply be overwhelmed by the diverse activities taking place. They may also have new applications sitting on top of the network but are not sure if they need to be standardised. Practical guidance could go a long way in clarifying this uncertainty. 

Again, a quick guide on edge computing standardisation could be a useful output, highlighting also the good cooperation already taking place as an important step in the right direction. 

You can see Part 1 of this webinar here.


Saturday, June 19, 2021

Edge Computing - Industry Vertical Viewpoints


A sub-set of 3GPP Market Representation Partners hosted a 2-part webinar series in April 2021 looking at edge computing for industry verticals and on-going standardisation work in 3GPP. The webinar was attended by a mix of organisations from both verticals and the telecommunication industry, helping to share a common understanding on edge computing. 

The first webinar brought together experts from the 5G Automotive Association (5GAA), the 5G Alliance for Connected Industry and Automation (5G-ACIA), Edge Gallery, ETSI Multi-access edge computing (MEC) and the Automotive Edge Computing Consortium (AECC) to highlight opportunities and updates on how diverse market sectors can benefit from offloading data at the edge of the network. Further insights came from interactive discussions and polling with participants. This webinar is part of a 5G user webinar and workshop series designed for industry verticals co-hosted by 5G-IA, 5GAA, 5G-ACIA and PSCE as Market Representation Partners of 3GPP.

The video embedded below is the recording of the webinar on edge computing held on Tuesday 20 April - part one, giving an educational deep dive on industry vertical viewpoints. 5GAA (5G Automotive Association) gives an overview of its white paper, use cases and upcoming trials for Cellular-V2X in the automotive sector. Edge Gallery shows how it is supporting the Industrial Internet of Things with its 5G open-source solutions and application development support. ETSI MEC explains its common and extensible application enabling platform for new business opportunities. 5G-ACIA (5G Alliance for Connected Industry and Automation) describes new work on the applicability of 5G industrial edge computing within the association. The Automotive Edge Computing Consortium (AECC) brings insights into how it is driving data to the edge.

Bios and PDF presentations as follows:

Global5G has a summary with main takeaways and poll findings here. The following is from there:

Main takeaways

  1. The webinar was an excellent deep-dive into the edge computing landscape highlighting on-going work in automotive, manufacturing and the Industrial Internet of Things, as well as standardisation work in ETSI and open-source approaches. 
  2. It illustrated the value of edge computing with strong signs coming from industry in terms of growing interest and adoption roadmaps. There is an impressive number of initiatives across the globe embracing edge computing, with examples of cooperation globally as seen in 5GAA, 5G-ACIA, AECC and ETSI MEC. 
  3. Industrial automation, digital twins and infrastructure control are among the main drivers of growing demand. 
  4. Collaboration on edge computing is essential and will become even more important as applications increasingly move to the edge. Continued discussions are needed to have greater clarity at multiple layers: business and technology, SW and HW. Collaboration can also support efforts to educate consumers and businesses, both key to uptake and achieving network compliant rollout.  
  5. The collaboration underpinning the 3GPP MRP webinar series is an excellent example of how we can intensify joint efforts across the ecosystem working towards convergence and ensuring RoI, e.g. for telecom investments. 

Poll Findings - Participant viewpoints

Where would you position your organisation in terms of implementing edge computing?

Only 16% of respondents already have a commercial strategy in place for edge computing, while 26% are starting to develop one, so 42% are expected to have one in the short term. 30% are at an early learning stage to understand market opportunities and 28% are exploring its potential. 

In which verticals do you expect the first implementations other than automotive?

The automotive sector is an early mover in edge computing, as testified by 5GAA and AECC presentations in the webinar with both having published studies and white papers. 5GAA is planning trials in 2021 in various locations globally so another webinar on this topic in 2022 would be helpful. After automotive, manufacturing is expected to be the next sector to implement edge, as testified by the 5G-ACIA presentation. All three associations are market representation partners of 3GPP, with 5GAA also contributing to standardisation work. In the 5G PPP, 5GCroCo (cross-border automotive use cases) has contributed to standardisation activities of both 5GAA and AECC. Gaming, AR/VR and media is the next sector expected to adopt edge computing. 

What are your top 2 priority requirements for edge computing? 

Low latency is the top requirement for most respondents (33%), followed by interoperability and service continuity (both on 20.5%), with transferring and processing large volumes of data and very high reliability in joint third place (both on 12.8%). It will be important to see how many of these requirements feature in early deployments as not all of them will be there at first rollout. The poll also shows how requirements combine, e.g. 2 priority requirements: Low latency + very high reliability; Interoperability + Service continuity; Interoperability + Low latency; 3 requirements: Interoperability + Service continuity + Transferring and processing large volumes of data; and 4 requirements: Interoperability + Service continuity + Low latency + Transferring and processing large volumes of data. 

Part 2 of this webinar is available here.


Saturday, October 10, 2020

What is Cloud Native and How is it Transforming the Networks?


Cloud native is talked about so often that it is assumed everyone knows what it means. Before going any further, here is a short introductory tutorial and video by my Parallel Wireless colleague, Amit Ghadge.

If instead you prefer a more detailed cloud native tutorial, here is another one from Award Solutions.

Back in June, Johanna Newman, Principal Cloud Transformation Manager, Group Technology Strategy & Architecture at Vodafone, spoke at Cloud Native World about Vodafone's cloud native journey.


Roz Roseboro, a former Heavy Reading analyst who covered the telecom market for nearly 20 years and is currently a Consulting Analyst at Light Reading, wrote a fantastic summary of that talk here. The talk is embedded below, and selected extracts from the Light Reading article follow:

While vendors were able to deliver some cloud-native applications, there were still problems ensuring interoperability at the application level. This means new integrations were required, and that sent opex skyrocketing.

I was heartened to see that Newman acknowledged that there is a difference between "cloud-ready" and "cloud-native." In the early days, many assumed that if a function was virtualized and could be managed using OpenStack, that the journey was over.

However, it soon became clear that disaggregating those functions into containerized microservices would be critical for CSPs to deploy functions rapidly and automate management and achieve the scalability, flexibility and, most importantly, agility that the cloud promised. Newman said as much, remarking that the jump from virtualized to cloud-native was too big a jump for hardware and software vendors to make.

The process of re-architecting VNFs to containerize them and make them cloud-native is non-trivial, and traditional VNF suppliers have not done so at the pace CSPs would like to see. I reference here my standard chicken and egg analogy: Suppliers will not go through the cost and effort to re-architect their software if there are no networks upon which to deploy them. Likewise, CSPs will not go through the cost and effort to deploy new cloud networks if there is no software ready to run on them. Of course, some newer entrants like Rakuten have been able to be cloud-native out of the gate, demonstrating that the promise can be realized, in the right circumstances.

Newman also discussed the integration challenges – which are not unique to telecom, of course, but loom even larger in their complex, multivendor environments. During my time as a cloud infrastructure analyst, in survey after survey, when asked what the most significant barrier to faster adoption of cloud-native architectures was, CSPs consistently ranked integration as the most significant.

Newman spent a little time discussing the work of the Common NFVi Telco Taskforce (CNTT), which is charged with developing a handful of reference architectures that suppliers can then design to, which will presumably help mitigate many of these integration challenges, not to mention VNF/CNF life cycle management (LCM) and ongoing operations.

Vodafone requires that all new software be cloud-native – calling it the "Cloud Native Golden Rule." This does not come as a surprise, as many CSPs have similar strategies. What did come as a bit of a surprise, was the notion that software-as-a-service (SaaS) is seen as a viable alternative for consuming telco functions. While the vendor with the SaaS offering may not itself be cloud-native (for example, it could still have hardware dependencies), from Vodafone's point of view, it ends up performing as such, given the lower operational and maintenance costs and flexibility of a SaaS consumption model.

If you have some other fantastic links, videos, resources on this topic, feel free to add in the comments.


Wednesday, May 1, 2019

Webinar: Where Edge Meets Cloud by Dean Bubley


Dean Bubley, Outspoken Telecoms & Mobile Industry Analyst, Consultant & Chair/Speaker on Networks, Wireless, Internet, AI & Futurism (as stated in his LinkedIn profile), recently did a webinar on Edge computing for Apis Training. The video recording is available online and embedded below.


A couple of things worth highlighting (but do listen to the webinar, it's got lots of interesting stuff) are shown in the pictures above and below. One of the benefits of edge is low latency. If that is the driver then you need to know where your edge should be, because latency will depend on the location. Another important point worth remembering is how many edge-compute facilities you can afford. Latency and the number of facilities are linked to each other, so this is worth thinking about at the beginning as it may not be straightforward to change later. A rough back-of-the-envelope sketch of that link follows below.
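
Here is that back-of-the-envelope sketch. It is our own illustration (not from Dean's webinar) and only models fibre propagation delay over a UK-sized area, ignoring processing, queuing, radio latency and real backhaul topology, so treat the numbers as indicative at best.

```python
import math

SPEED_IN_FIBRE_KM_PER_MS = 200.0   # light covers roughly 200 km per ms in fibre

def sites_needed(coverage_area_km2: float, max_distance_km: float) -> int:
    """Rough count of edge sites so no user is further than max_distance_km away."""
    area_per_site_km2 = math.pi * max_distance_km ** 2
    return math.ceil(coverage_area_km2 / area_per_site_km2)

# For different round-trip latency budgets, how far away may the edge site be,
# and roughly how many sites does a ~245,000 km^2 country then need?
for rtt_budget_ms in (0.5, 1, 5):
    max_km = (rtt_budget_ms / 2) * SPEED_IN_FIBRE_KM_PER_MS  # one-way distance
    print(f"{rtt_budget_ms} ms RTT -> within {max_km:.0f} km "
          f"-> about {sites_needed(245_000, max_km)} site(s)")
```

The tighter the latency target, the closer (and therefore the more numerous) the edge facilities have to be, which is exactly why the two questions need to be answered together.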



Anyway, here is the recording of the webinar.






Tuesday, March 12, 2019

Can Augmented & Mixed Reality be the Killer App 5G needs?


Last October Deutsche Telekom, Niantic and MobiledgeX announced a partnership to create advanced augmented reality experiences over mobile network technologies. I was lucky to find some time to go and play it at the Deutsche Telekom booth. The amount of processing needed for this to work at its best also meant that the new Samsung Galaxy S10+ was needed, but I felt that it also occasionally struggled with the amount of data being transferred.


The pre-MWC press release said:

Deutsche Telekom, Niantic Inc., MobiledgeX and Samsung Showcase World’s First Mobile Edge Mixed Reality Multi-Gamer Experience

At the Deutsche Telekom booth at MWC 2019 (hall 3, booth 3M31) the results of the previously announced collaboration between Deutsche Telekom, Niantic, Inc., and MobiledgeX are on display and you’re invited to play. Niantic’s “Codename: Neon”, the world’s first edge-enhanced Mixed Reality Multiplayer Experience, delivered by ultra-low latency, Deutsche Telekom edge-enabled network, and Samsung Galaxy S10+ with edge computing enablement, will be playable by the public for the first time. 

“The ultra-low latency that Mobile Edge Computing (MEC) enables, allows us to create more immersive, exciting, and entertaining gameplay experiences. At Niantic, we’ve long celebrated adventures on foot with others, and with the advent of 5G networks and devices, people around the world will be able to experience those adventures faster and better,” said Omar Téllez, Vice-President of Strategic Partnerships at Niantic.

The collaboration is enabled using MobiledgeX’s recently announced MobiledgeX Edge-Cloud R1.0 product. Key features include device and platform-independent SDKs, a Distributed Matching Engine (DME) and a fully multi-tenant control plane that supports zero-touch provisioning of edge cloud resources as close as possible to the users. Immediate examples of what this enables include performance boosts for Augmented Reality and Mixed Reality (MR) experiences as well as video and image processing that meets local privacy regulations. 

Samsung has been working together with Deutsche Telekom, MobiledgeX, and Niantic on a natively edge-capable connectivity and authentication in Samsung Galaxy S10+ to interface with MobiledgeX Edge-Cloud R1.0 and dynamically access the edge infrastructure it needs so that augmented reality and mixed reality applications can take advantage of edge unmodified. Samsung will continue such collaborations with industry-leading partners not only to embrace a native device functionality of edge discovery and usage for the mobile devices and consumers, but also to seek a way together to create new business models and revenue opportunities leading into 5G era.

Deutsche Telekom’s ultra-low latency network was able to deliver on the bandwidth demands of “Codename: Neon” because it deployed MobiledgeX’s edge software services, built on dynamically managed decentralized cloudlets. “From our initial partnership agreement in October, we are thrilled to showcase the speed at which we can move from idea to experience, with full end-to-end network integration, delivered on Samsung industry leading edge native devices,” said Alex Jinsung Choi, Senior Vice President Strategy and Technology Innovation at Deutsche Telekom.

From the gaming industry to industrial IoT, and computer vision applications, consumer or enterprise, the experience is a great example of interactive AR experiences coming from companies like Niantic in the near future.  As AR/VR/MR immersive experiences continue to shape our expectations, devices, networks and clouds need to seamlessly and dynamically collaborate.
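
Purely as a conceptual illustration of what a distributed matching engine does (this is not the MobiledgeX SDK or its API, just a toy sketch with made-up cloudlet data), the snippet below picks the nearest cloudlet that still has free capacity for a given device location.

```python
import math

# Hypothetical cloudlets: (name, latitude, longitude, free capacity slots)
CLOUDLETS = [
    ("berlin-edge-1",    52.52, 13.40, 3),
    ("frankfurt-edge-1", 50.11,  8.68, 0),   # full, should be skipped
    ("munich-edge-1",    48.14, 11.58, 5),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def match_cloudlet(user_lat, user_lon):
    """Return the closest cloudlet that still has spare capacity."""
    candidates = [c for c in CLOUDLETS if c[3] > 0]
    return min(candidates, key=lambda c: haversine_km(user_lat, user_lon, c[1], c[2]))

# A device in Nuremberg (49.45 N, 11.08 E) gets matched to munich-edge-1.
print(match_cloudlet(49.45, 11.08)[0])
```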

This video from the Deutsche Telekom booth shows what the game actually feels like:



Niantic CEO John Hanke delivered a keynote at Mobile World Congress 2019 (embedded below). According to the Fortune article, "Why the Developer of the New 'Harry Potter' Mobile Game and 'Pokemon Go' Loves 5G":

Hanke showed a video of a prototype game Niantic has developed codenamed Neon that allows multiple people in the same place at the same time to play an augmented reality game. Players can shoot at each other, duck and dodge, and pick up virtual reality items, with each player’s phone showing them the game’s graphics superimposed on the real world. But the game depends on highly responsive wireless connections for all the phones, connections unavailable on today’s 4G LTE networks.

“We’re really pushing the boundaries of what we can do on today’s networks,” Hanke said. “We need 5G to deliver the kinds of experiences that we are imagining.”

Here is the video; it's very interesting and definitely worth a watch. For those who may not know, Niantic spun out of Google in October 2015, soon after Google's announcement of its restructuring as Alphabet Inc. During the spinout, Niantic announced that Google, Nintendo, and The Pokémon Company would invest up to $30 million in Series-A funding.



So what do you think, can AR / MR be the killer App 5G needs?

Tuesday, February 12, 2019

Prof. Andy Sutton: 5G Radio Access Network Architecture Evolution - Jan 2019


Prof. Andy Sutton delivered his annual IET talk last month at the 6th Annual 5G Conference. You can watch the videos for that event here (not all had been uploaded at the time of writing this post). His talks have always been very popular on this blog, with last year's talk being the 2nd most popular while the one from 2017 was the most popular. Thanks also to the IET for hosting this annual event and to IET.tv for making these videos available for free.

The slides and video are embedded below, but for new starters, before jumping into this, you may want to check out the 5G Network Architecture options in our tutorial here.




As always, this is full of useful information, with insight into how BT/EE is thinking about deploying 5G in the UK.


Tuesday, May 1, 2018

MAMS (Multi Access Management Services) at MEC integrating LTE and Wi-Fi networks

Came across Multi Access Management Services (MAMS) a few times recently, so here is a quick post on the topic. At present MAMS is under review in the IETF and is being supported by Nokia, Intel, Broadcom, Huawei, AT&T and KT.

I heard about MAMS for the first time at a Small Cell Forum event in Mumbai; the slides for that particular Nokia presentation are here.

As you can see from the slide above, MAMS can optimise inter-working of different access domains, particularly at the Edge. A recent presentation from Nokia (here) on this topic provides much more detailed insight.

From the presentation:

MAMS (Multi Access Management Services) is a framework for:

  • integrating different access network domains based on user plane (e.g. IP layer) interworking,
  • with the ability to select access and core network paths independently,
  • and user plane treatment based on traffic types,
  • that can dynamically adapt to changing network conditions,
  • based on negotiation between client and network.

The technical content is available as the following drafts*:

  • MAMS User Plane Specification: https://tools.ietf.org/html/draft-zhu-intarea-mams-user-protocol-02

*Currently under review. Co-authors: Nokia, Intel, Broadcom, Huawei, AT&T, KT.

The slides provide much more detail, including the different use cases (pic below) for integrating LTE and Wi-Fi at the Edge.
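
To give a feel for the kind of per-traffic-type path selection MAMS enables, here is a deliberately simplified sketch. It is our own illustration, not the protocol defined in the IETF drafts; the measurements, policy names and thresholds are invented for the example, and in real MAMS the policy is negotiated between client and network.

```python
# Hypothetical current measurements for each access network.
ACCESS_MEASUREMENTS = {
    "lte":  {"rtt_ms": 45, "loss": 0.001},
    "wifi": {"rtt_ms": 12, "loss": 0.02},
}

# Illustrative policy: what matters most for each traffic type.
POLICY = {
    "voice":      "low_loss",
    "video":      "low_latency",
    "background": "any",
}

def select_access(traffic_type: str) -> str:
    """Steer a flow onto LTE or Wi-Fi based on traffic type and current conditions."""
    goal = POLICY.get(traffic_type, "any")
    if goal == "low_loss":
        return min(ACCESS_MEASUREMENTS, key=lambda a: ACCESS_MEASUREMENTS[a]["loss"])
    if goal == "low_latency":
        return min(ACCESS_MEASUREMENTS, key=lambda a: ACCESS_MEASUREMENTS[a]["rtt_ms"])
    return "wifi" if ACCESS_MEASUREMENTS["wifi"]["loss"] < 0.05 else "lte"

for traffic in ("voice", "video", "background"):
    print(traffic, "->", select_access(traffic))
```

Because the measurements can be refreshed at any time, the same logic naturally adapts as network conditions change, which is the dynamic behaviour the framework is aiming for.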


Here are the references for anyone wishing to look at this in more detail:

Sunday, January 22, 2017

Augmented / Virtual Reality Requirements for 5G


Ever wondered whether 5G will be good enough for Augmented and Virtual Reality, or whether we will need to wait for 6G? Some researchers are trying to identify the AR/VR requirements and challenges from a mobile network point of view, and possible options to solve these challenges. They have recently published a research paper on this topic.

Here is a summary of some of the interesting things I found in this paper:

  • Humans process nearly 5.2 gigabits per second of sound and light.
  • Without moving the head, our eyes can mechanically shift across a field of view of at least 150 degrees horizontally (i.e., 30,000 pixels) and 120 degrees vertically (i.e., 24,000 pixels).
  • The human eye can perceive much faster motion (150 frames per second). For sports, games, science and other high-speed immersive experiences, video rates of 60 or even 120 frames per second are needed to avoid motion blur and disorientation.
  • 5.2 gigabits per second of network throughput (if not more) is needed.
  • Today’s 4K resolution at 30 frames per second and 24 bits per pixel, using a 300:1 compression ratio, yields 300 megabits per second of imagery. That is more than 10x the typical requirement for a high-quality 4K movie experience.
  • 5G network architectures are being designed to move the post-processing to the network edge so that processors at the edge and the client display devices (VR goggles, smart TVs, tablets and phones) carry out advanced image processing to stitch camera feeds into dramatic effects.
  • In order to tackle these grand challenges, the 5G network architecture (radio access network (RAN), Edge and Core) will need to be much smarter than ever before by adaptively and dynamically making use of concepts such as software defined networking (SDN), network function virtualization (NFV) and network slicing, to mention a few, facilitating a more flexible allocation of resources (resource blocks (RBs), access points, storage, memory, computing, etc.) to meet these demands.
  • Immersive technology will require massive improvements in terms of bandwidth, latency and reliability. A current remote-reality prototype requires 100-to-200 Mbps for a one-way immersive experience. While MirrorSys uses a single 8K display, estimates suggest photo-realistic VR will require two 16K x 16K screens (one for each eye). A back-of-the-envelope sketch of this kind of arithmetic follows after this list.
  • Latency is the other big issue in addition to reliability. With an augmented reality headset, for example, real-life visual and auditory information has to be taken in through the camera and sent to the fog/cloud for processing, with digital information sent back to be precisely overlaid onto the real-world environment, and all this has to happen in less time than it takes for humans to start noticing lag (no more than 13 ms). Factoring in the much needed high reliability criteria on top of these bandwidth and delay requirements clearly indicates the need for interactions between several research disciplines.
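
Here is the back-of-the-envelope sketch promised above. The arithmetic is ours, with the resolutions and compression ratio taken as illustrative inputs, so the results will differ from the paper's figures depending on its exact assumptions.

```python
def stream_bitrate_mbps(width, height, fps, bits_per_pixel, compression_ratio):
    """Raw pixel rate divided by the compression ratio, in megabits per second."""
    raw_bps = width * height * fps * bits_per_pixel
    return raw_bps / compression_ratio / 1e6

# Two 16K x 16K eye buffers (the photo-realistic VR estimate above) at 120 fps,
# 24 bits per pixel and an aggressive 300:1 compression ratio:
per_eye_mbps = stream_bitrate_mbps(16000, 16000, 120, 24, 300)
print(round(2 * per_eye_mbps / 1000, 1), "Gbit/s for both eyes")   # ~4.9 Gbit/s

# The ~13 ms motion-to-photon budget is just as tight: light in fibre covers
# roughly 200 km per millisecond, so a server 500 km away already costs about
# 5 ms of round-trip propagation before any processing has happened.
print(2 * 500 / 200, "ms round-trip propagation for a server 500 km away")
```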


These key research directions and scientific challenges are summarized in Fig. 3 (above) and discussed in the paper. I advise you to read it here.


Saturday, November 21, 2015

'Mobile Edge Computing' (MEC) or 'Fog Computing' (fogging) and 5G & IoT


Picture Source: Cisco

The clouds are up in the sky whereas the fog is low, on the ground. This is how fog computing is referred to, as opposed to the cloud. Fog sits at the edge (which is why it is also called edge computing) to reduce latency and do an initial level of processing, thereby reducing the amount of information that needs to be exchanged with the cloud.

The same paradigm is used in the case of 5G to refer to edge computing, which is required when we are targeting 1 ms latency in certain cases.
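
A minimal sketch of that 'initial level of processing' idea follows. It is our own illustration, not from any of the sources quoted below: a fog/edge node aggregates raw sensor readings locally and only forwards a compact summary (and any alarms) to the cloud.

```python
from statistics import mean

def edge_preprocess(raw_readings, alarm_threshold=80.0):
    """Aggregate raw sensor samples at the edge; send only a summary upstream."""
    return {
        "count": len(raw_readings),
        "mean": round(mean(raw_readings), 2),
        "max": max(raw_readings),
        "alarm": any(r > alarm_threshold for r in raw_readings),
    }

# 10,000 temperature samples collected locally become one small message
# to the cloud, which is the bandwidth (and latency) saving fog promises.
samples = [20.0 + (i % 70) for i in range(10_000)]
print(edge_preprocess(samples))
```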

As this whitepaper from Ovum & Eblink explains:

Mobile Edge Computing (MEC): Where new processing capabilities are introduced in the base station for new applications, with a new split of functions and a new interface between the baseband unit (BBU) and the remote radio unit (RRU).
...
Mobile Edge Computing (MEC) is an ETSI initiative, where processing and storage capabilities are placed at the base station in order to create new application and service opportunities. This new initiative is called “fog computing” where computing, storage, and network capabilities are deployed nearer to the end user.

MEC contrasts with the centralization principles discussed above for C-RAN and Cloud RAN. Nevertheless, MEC deployments may be built upon existing C-RAN or Cloud RAN infrastructure and take advantage of the backhaul/fronthaul links that have been converted from legacy to these new centralized architectures.

MEC is a long-term initiative and may be deployed during or after 5G if it gains support in the 5G standardization process. Although it is in contrast to existing centralization efforts, Ovum expects that MEC could follow after Cloud RAN is deployed in large scale in advanced markets. Some operators may also skip Cloud RAN and migrate from C-RAN to MEC directly, but MEC is also likely to require the structural enhancements that C-RAN and Cloud RAN will introduce into the mobile network.

The biggest challenge facing MEC in the current state of the market is its very high costs and questionable new service/revenue opportunities. Moreover, several operators are looking to invest in C-RAN and Cloud RAN in the near future, which may require significant investment to maintain a healthy network and traffic growth. In a way, MEC is counter to the centralization principle of Centralized/Cloud RAN and Ovum expects it will only come into play when localized applications are perceived as revenue opportunities.

And similarly this Interdigital presentation explains:

Extends cloud computing and services to the edge of the network and into devices. Similar to cloud, fog provides network, compute, storage (caching) and services to end users. The distinguishing feature of Fog reduces latency & improves QoS resulting in a superior user experience

Here is a small summary of the patents that have been filed relating to IoT and Fog Computing.