
Monday, 1 September 2025

Software Efficiency Matters as Much as Hardware for Sustainability

When we talk about making computing greener, the conversation often turns to hardware. Data centres have become far more efficient over the years. Power supply units that once wasted 40% of energy now operate above 90% efficiency. Cooling systems that once consumed several times the power of the servers themselves have been dramatically improved. The hardware people have delivered.

But as Bert Hubert argues in his talk “Save the world, write more efficient code”, software has been quietly undoing many of those gains. Software bloat has outpaced hardware improvements. What once required careful optimisation is now often solved by throwing more cloud resources at the problem. That keeps systems running, but at a significant energy cost.

The hidden footprint of sluggish software

Sluggish systems are not just an annoyance. Every loading spinner, every second a user waits, often means CPUs are running flat out somewhere in the chain. At scale, those wasted cycles add up to megawatt-hours of electricity. Studies suggest that servers are responsible for around 4% of global CO₂ emissions, on a par with the entire aviation industry. That is not a small share, and it makes efficient software a climate issue.

Hubert points out that the difference between badly written code, reasonable code, and highly optimised code can easily span a factor of 100 in computing requirements. He demonstrates this with a simple example: generating a histogram of Dutch house numbers from a dataset of 9.9 million addresses.

  • A naïve Python implementation took 12 seconds and consumed over 500 joules of energy per run.
  • A straightforward database query reduced this to around 20 joules.
  • Using DuckDB, a database optimised for analytics, the same task dropped to just 2.5 joules and completed in milliseconds.
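The gap between the naïve approach and the database approach can be sketched in miniature. The snippet below is an illustration, not Hubert's actual benchmark: the data is synthetic, and Python's built-in sqlite3 stands in for the analytics database. It contrasts a hand-rolled interpreter loop with a SQL GROUP BY computing the same histogram.

```python
import sqlite3

# Synthetic stand-in for the Dutch address dataset (hypothetical values).
house_numbers = [n % 200 + 1 for n in range(10_000)]

def histogram_loop(numbers):
    """Naive approach: an explicit Python loop over every row."""
    counts = {}
    for n in numbers:
        counts[n] = counts.get(n, 0) + 1
    return counts

def histogram_sql(numbers):
    """Database approach: let the engine aggregate with GROUP BY."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE addresses (house_number INTEGER)")
    con.executemany("INSERT INTO addresses VALUES (?)",
                    ((n,) for n in numbers))
    rows = con.execute(
        "SELECT house_number, COUNT(*) FROM addresses GROUP BY house_number"
    ).fetchall()
    con.close()
    return dict(rows)

# Both produce the same histogram.
assert histogram_loop(house_numbers) == histogram_sql(house_numbers)
```

Both functions return identical results; the difference is that the database does the counting in optimised native code rather than in the bytecode interpreter, which is broadly where gaps like Hubert's 500 J versus 20 J measurement come from.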

The user experience also improved dramatically. What once required a long wait became effectively instantaneous.

From data centres to “data sheds”

The point is not just academic. If everyone aimed for higher software efficiency, Hubert suggests, many data centres could be shrunk to the size of a shed. Unlike hardware, where efficiency can be bought, software efficiency has to be designed and built. It requires time, effort and, crucially, management permission to prioritise performance over simply shipping features.

Netflix provides a striking example. Its custom Open Connect appliances deliver around 45,000 video streams at under 10 milliwatts per user. By investing heavily in efficiency, they proved that optimised software and hardware together can deliver enormous gains.

The cloud and client-side challenge

The shift to the cloud has created perverse incentives. In the past, if your code was inefficient, the servers would crash and force a rewrite. Now, organisations can simply spin up more cloud instances. That makes it too easy to ignore software waste and too tempting to pass the costs into ever-growing cloud bills. Those costs are not only financial, but also environmental.

On the client side, the problem is subtler but still real. While loading sluggish web apps may not burn as much power as a data centre, the sheer number of devices adds up. Hubert measured that opening LinkedIn on a desktop consumed around 45 joules. Scaled to hundreds of millions of users, even modest inefficiencies start to look like power plants.

Sometimes the situation is worse. Hubert found that simply leaving open.spotify.com running in a browser kept his machine burning an additional 45 watts continuously, due to a rogue worker thread. With hundreds of millions of users, that single design choice could represent hundreds of megawatts of wasted power globally.

Building greener software

The lesson is clear. Early sluggishness never goes away. If a system is slow with only a handful of users, it will be catastrophically wasteful at scale. The time to demand efficiency is at the start of a project.

There are also practical steps engineers and organisations can take:

  • Measure energy use during development, not just performance.
  • Audit client-side behaviour for long-lived applications.
  • Incentivise teams to improve efficiency, not just to ship quickly.
  • Treat large cloud bills as a proxy for emissions as well as costs.
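On Linux, the first of these steps can be put into practice with the kernel's RAPL (Running Average Power Limit) interface, which exposes a cumulative package-energy counter in microjoules under sysfs. The sketch below is a minimal illustration, assuming an Intel machine with powercap enabled; the sysfs paths are the standard ones, but the wraparound handling is a simplified example rather than a production-ready meter.

```python
from pathlib import Path

# Standard powercap sysfs nodes for package-level energy (Intel RAPL).
RAPL_ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")
RAPL_MAX = Path("/sys/class/powercap/intel-rapl:0/max_energy_range_uj")

def read_energy_uj() -> int:
    """Read the cumulative energy counter in microjoules."""
    return int(RAPL_ENERGY.read_text())

def delta_joules(before_uj: int, after_uj: int, max_range_uj: int) -> float:
    """Energy used between two readings, correcting for counter wraparound."""
    diff = after_uj - before_uj
    if diff < 0:  # the counter wrapped past its maximum range
        diff += max_range_uj
    return diff / 1_000_000  # microjoules -> joules
```

In use, you would take one reading before running a workload and one after, then report `delta_joules(before, after, int(RAPL_MAX.read_text()))` alongside the usual timing numbers, so that energy regressions show up in development just as latency regressions do.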

As Hubert says, we may only be able to influence 4% of global energy use through software. But that is the same impact as the aviation industry. Hardware engineers have done their part. Now it is time for software engineers to step up.

You can watch Bert Hubert’s full talk below, where he shares both entertaining stories and sobering measurements that show why greener software is not only possible but urgently needed. The PDF of slides is here and his LinkedIn discussion here.


Wednesday, 26 February 2025

Reigniting Growth in the Telecom Industry with AI and Cloud

The telecom industry is at a crossroads. While demand for connectivity continues to surge, operators face stagnating revenues, rising costs, and increasing competition. In his keynote at the Brooklyn 6G Summit 2024, Manish Singh, CTO of Telecom Systems Business at Dell Technologies, outlined a compelling vision for how AI and cloud-native networks can reignite growth in the sector.

The Growth Challenge in Telecom

The traditional telecom business model is under pressure. Operators are struggling with:

  • Revenue stagnation despite increasing data consumption.
  • Rising operational costs driven by legacy infrastructure and inefficient processes.
  • Intensifying competition from hyperscalers and alternative connectivity providers.

To overcome these challenges, Manish argues that telcos must embrace AI-native and cloud-native architectures as fundamental enablers of transformation.

AI: The Catalyst for Intelligent Networks

AI is not just an add-on; it must be at the core of future telecom networks. Manish highlighted several ways AI can drive growth:

  • Automation of network operations: AI-driven predictive maintenance and self-optimising networks reduce downtime and operational expenses.
  • Enhanced service delivery: AI enables hyper-personalised customer experiences and intelligent traffic management.
  • Operational efficiency: AI optimises energy consumption, spectrum allocation, and overall network resource utilisation.

Manish emphasised that AI-native networks will be a defining feature of 6G, making networks more autonomous, efficient, and scalable.

Cloud-native Architectures: The Foundation for Scalability

Moving beyond traditional, hardware-centric networks is essential. Manish advocates for a cloud-first approach, where telecom networks are:

  • Software-defined and virtualised, reducing dependence on costly proprietary hardware.
  • Highly scalable, allowing operators to adjust capacity dynamically.
  • Interoperable and open, fostering innovation through Open RAN and disaggregated networks.

By embracing cloud-native principles, telcos can accelerate service delivery, reduce costs, and stay competitive in an increasingly software-driven ecosystem.

AI Infrastructure: Scaling from Edge to Core

A key enabler of AI and cloud-native networks is the AI Factory approach, which provides scalable infrastructure from mega-scale data centres to the edge. Manish highlighted how AI workloads must be supported across different network layers, from on-premise enterprise deployments to far-edge, near-edge, and core data centres.

Dell Technologies' AI Factory is designed to:

  • Support diverse AI edge use cases in telecom.
  • Handle power and cooling constraints, crucial for efficient AI model training and inference.
  • Leverage cloud-native architectures to ensure seamless scalability and automation across the entire network.

This modular infrastructure ensures that telecom networks can efficiently process AI workloads at every layer, enabling real-time decision-making and optimised operations.

Overcoming Challenges in AI and Cloud Adoption

Despite the clear benefits, Manish acknowledged key barriers:

  • Legacy infrastructure: Transitioning from traditional networks requires significant investment.
  • Security and privacy concerns: AI-driven automation raises questions about data integrity and network security.
  • Industry mindset shift: Operators must adopt a culture of innovation and rapid iteration.

Addressing these challenges requires industry-wide collaboration, strong partnerships with cloud providers, and a commitment to open innovation.

Conclusion: The Time to Act is Now

Manish’s message to the industry was clear: AI and cloud are not future aspirations; they are essential for telecom survival and growth. By leveraging AI-native automation and cloud-native architectures, operators can reignite growth, drive efficiency, and prepare for the 6G era.

Watch Manish Singh’s full keynote embedded below:
