
The case for prioritising efficiency in data centre workloads grows stronger all the time.

There has been widespread recognition for some time now that the massive energy consumption needed to keep the world's data centres running is a major contributor to carbon emissions. As digitisation accelerates, global data production and use are expected to double between now and 2025.

All of that data – the sum of more internet users, more connected devices, more cloud-based systems, more compute-heavy technologies like AI, AR/VR, the metaverse and so on – has to be processed and stored somewhere. That means demand for data centre resources is also growing exponentially. And one of those resources is electricity.


From an environmental perspective, as the bulk of the world’s electricity still comes from burning fossil fuels, such massive growth in demand for data centres inevitably means higher carbon emissions. But as energy prices soar around the world, greater electricity consumption also means higher costs. Data centre operators are in a race to keep up with demand. But they also need to keep one eye on those spiralling costs.

It’s thought that data centres currently account for 1% of global energy consumption. This has actually stayed more or less stable since 2010, despite huge growth in data centre use. That’s because data centre operators have worked tirelessly to improve energy efficiency throughout that period. Their efforts have in effect kept pace with growth in demand so net energy consumption has been unaffected.

But there are fears that the traditional methods used for making data centres more energy efficient – hyperscaling, building and infrastructure design, alternative cooling methods – are no longer enough. One EU study has concluded that data centres will eat up 3.7% of Europe’s electricity supply by 2030.

So what are the options left for further improving data centre efficiency at a scale that will keep up with such enormous increases in data centre usage? It’s here that all eyes turn to server hardware.

CPUs, storage… and both

The world of computer science has long battled the recognition that there are physical limits to how fast and efficient you can make computer chips (more precisely, central processing units, or CPUs). The way you make a CPU more powerful and efficient is to pack it with more transistors. But that raises the issue of size and space, so the focus is on making transistors smaller and smaller.

Transistors just a nanometre (1nm) in width, or one billionth of a metre, have already been created in research labs, while the smallest microprocessors in mainstream production use transistors around 40nm in width. But the smaller you go, the more difficult and expensive production becomes. Eventually, current technology hits a wall where it can't go any further, which also means you can't make chips any more efficient.
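To get a feel for what these numbers imply, here is a rough back-of-envelope sketch. It assumes (simplistically) that transistor density scales with the inverse square of transistor width, and uses the 40nm and 1nm figures mentioned above; real chip design involves many more constraints, so treat this as an illustration only.

```python
# Illustrative scaling arithmetic, not a chip-design model.
# Assumption: density scales with the inverse square of transistor width.
mainstream_width_nm = 40   # smallest mainstream transistors (per the text)
lab_width_nm = 1           # smallest lab-demonstrated transistors (per the text)

density_gain = (mainstream_width_nm / lab_width_nm) ** 2
print(f"~{density_gain:.0f}x more transistors in the same area")
# → ~1600x more transistors in the same area
```

Even under this idealised assumption, the gain per shrink comes at rapidly rising production cost, which is the wall the paragraph above describes.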

Chips are not the only focus for improving the overall efficiency of a server, however. Typically, less than 10% of the data held in a data centre is ‘active’ at any given time, meaning it is being used by CPUs. The rest is held in storage.

It has long been known that solid-state drives (SSDs) are much more efficient than conventional hard disk drives (HDDs), using around 70% less power for the same capacity. Traditionally, data centre operators have been reluctant to transition fully to SSDs because HDD storage is much cheaper. But with energy prices rising so sharply, reducing power consumption by using SSDs increasingly makes economic as well as environmental sense.
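A quick sketch shows how that 70% figure translates into money. The percentage comes from the text above; the fleet size, per-drive wattage and electricity tariff below are hypothetical placeholders, so only the shape of the calculation matters, not the exact result.

```python
# Illustrative estimate only: the 70% saving is from the article;
# everything else here is an assumed placeholder value.
hdd_watts = 8.0                       # assumed average power draw per HDD
ssd_watts = hdd_watts * (1 - 0.70)    # SSDs use ~70% less power (per the text)
drives = 10_000                       # hypothetical fleet size
price_per_kwh = 0.30                  # hypothetical electricity tariff

hours_per_year = 24 * 365

def annual_cost(watts_per_drive: float) -> float:
    """Annual electricity cost for the whole fleet at a given per-drive draw."""
    kwh = watts_per_drive * drives * hours_per_year / 1000
    return kwh * price_per_kwh

saving = annual_cost(hdd_watts) - annual_cost(ssd_watts)
print(f"Estimated annual saving: {saving:,.0f}")
# → Estimated annual saving: 147,168
```

At fleet scale, a per-drive efficiency gain compounds into a substantial line item, which is why rising energy prices shift the HDD-versus-SSD economics.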

Utilisation is another area where clear efficiency gains can be made. It's thought that between 5% and 10% of all server resources in a typical data centre remain active but unused, consuming energy without contributing anything to performance. The answer here is software orchestration: identifying underutilised resources so their power can be switched off automatically, and optimising resource allocation across all available servers.
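The core of that orchestration idea can be sketched in a few lines. This is a minimal, hypothetical illustration: the server names, telemetry values and 5% threshold are all invented for the example, and a real orchestrator would also handle workload migration before powering anything down.

```python
# Minimal sketch of the orchestration idea: flag servers whose average
# CPU utilisation falls below a threshold, so they can be consolidated
# and powered down. All values here are hypothetical.
UTILISATION_THRESHOLD = 0.05  # assumed cut-off; real policies vary

# Hypothetical telemetry: server name -> average CPU utilisation (0 to 1)
telemetry = {
    "srv-01": 0.62,
    "srv-02": 0.03,   # active but essentially idle
    "srv-03": 0.41,
    "srv-04": 0.01,   # active but essentially idle
}

idle = [name for name, util in telemetry.items()
        if util < UTILISATION_THRESHOLD]
print("Candidates for power-down:", idle)
# → Candidates for power-down: ['srv-02', 'srv-04']
```

In practice this logic lives inside cluster schedulers and power-management tooling, which also rebalance the remaining workloads across the servers left running.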

Finally, a more future-facing solution to improving data centre hardware efficiency is to rethink our conventional models of processing and data handling from scratch with a view to making them leaner and less power hungry. An example of this in action is a project at University College London (UCL) to create hardware that combines processing and storage in a single unit – an approach inspired by the way our brains work.

The idea is that this will cut down on the need to move data between processing and storage units, which consumes a lot of energy. The research team behind the project believe servers built on this kind of chip model could be up to 100,000 times more energy efficient than current chips.
