Data centres are the backbone of today’s digital world, providing the necessary infrastructure for organisations to store, process, and manage vast amounts of data. As a CTO, optimising the performance and reducing costs of your data centre can have a significant impact on your organisation’s bottom line. In this blog post, we will explore proven strategies that can help you unlock the full potential of your data centre, maximise efficiency, and ensure a competitive edge in the ever-evolving digital landscape.

  1. Implementing Energy-Efficient Cooling Systems

One of the primary sources of data centre operating costs is cooling. Traditional cooling systems often consume large amounts of energy, increasing your organisation’s carbon footprint and energy costs. By implementing energy-efficient cooling systems like liquid cooling, hot aisle/cold aisle containment, and free air cooling, you can reduce energy consumption and improve overall data centre efficiency.

  2. Regularly Monitoring and Analysing Power Usage

Data centre energy efficiency is directly linked to power usage. Regularly monitoring and analysing power usage data can provide valuable insights into potential inefficiencies and help you identify opportunities for optimisation. By implementing a data-driven approach to power management, you can optimise power distribution, reduce energy costs, and improve the overall performance of your data centre.
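One simple, widely used metric for this kind of monitoring is Power Usage Effectiveness (PUE): the ratio of total facility power to IT equipment power, where 1.0 is the theoretical ideal. A minimal sketch of tracking it over time, using invented readings rather than real telemetry:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; typical facilities sit between 1.2 and 2.0."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical hourly snapshots; the figures are illustrative only.
readings = [
    {"hour": 0, "total_kw": 1800.0, "it_kw": 1000.0},
    {"hour": 1, "total_kw": 1650.0, "it_kw": 1000.0},
    {"hour": 2, "total_kw": 1500.0, "it_kw": 1000.0},
]

for r in readings:
    r["pue"] = pue(r["total_kw"], r["it_kw"])

# Flag the least efficient period as a candidate for investigation.
worst = max(readings, key=lambda r: r["pue"])
print(f"Worst hour: {worst['hour']} with PUE {worst['pue']:.2f}")
```

Trending this ratio over days or weeks is what surfaces the inefficiencies the paragraph above describes, for instance cooling that does not scale down overnight with the IT load.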

  3. Virtualisation and Consolidation

Server virtualisation is a proven method for increasing data centre efficiency by allowing multiple applications to run on a single physical server. By consolidating servers and using virtualisation technologies, you can reduce your hardware footprint, decrease power and cooling requirements, and minimise the total cost of ownership.
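To illustrate the arithmetic behind consolidation, a rough back-of-the-envelope sketch (the utilisation figures are assumptions for the example, not benchmarks):

```python
import math

def hosts_after_consolidation(n_workloads: int, avg_util: float,
                              target_util: float = 0.7) -> int:
    """Rough estimate: how many physical hosts are needed if workloads
    each averaging `avg_util` of one host's capacity are packed onto
    hosts run up to `target_util` utilisation."""
    if not 0 < avg_util <= target_util <= 1:
        raise ValueError("utilisations must satisfy 0 < avg <= target <= 1")
    return math.ceil(n_workloads * avg_util / target_util)

# 100 lightly loaded servers at 10% utilisation, consolidated onto
# virtualisation hosts targeted at 70% utilisation:
print(hosts_after_consolidation(100, 0.10))
```

Even this crude model shows why consolidation cuts hardware footprint, power and cooling so sharply: dozens of mostly idle machines collapse into a handful of well-utilised ones.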

  4. Investing in Energy-Efficient Hardware

Energy-efficient hardware can play a crucial role in optimising your data centre’s performance. By investing in modern, energy-efficient servers, storage systems, and networking equipment, you can significantly reduce energy consumption, minimise operating costs, and maximise the overall efficiency of your data centre.

  5. Adopting Automation and AI-driven Solutions

Automation and AI-driven solutions can help streamline data centre operations, increase uptime, and improve overall efficiency. By automating routine tasks such as server provisioning, load balancing, and power management, you can reduce human error and ensure a more efficient use of resources. Additionally, AI-driven solutions can provide real-time insights into your data centre’s performance, enabling you to make data-driven decisions for optimisation.
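As a toy illustration of rule-based automation of this kind, here is a sketch that turns host metrics into planned actions. The thresholds and action names are invented for the example, not vendor defaults:

```python
def plan_actions(hosts: dict) -> list:
    """Map host CPU utilisation (0.0-1.0) to remediation actions using
    simple threshold rules. Thresholds are assumptions for the sketch."""
    actions = []
    for name, cpu in hosts.items():
        if cpu > 0.85:
            # Overloaded: rebalance before users feel it.
            actions.append((name, "migrate-vms-away"))
        elif cpu < 0.10:
            # Nearly idle: a candidate for consolidation and power-down.
            actions.append((name, "candidate-for-power-down"))
    return actions

hosts = {"host-a": 0.92, "host-b": 0.45, "host-c": 0.05}
print(plan_actions(hosts))
```

Real platforms layer AI-driven forecasting on top of rules like these, but the core loop is the same: measure, decide, act, without waiting for a human.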

  6. Regular Maintenance and Upgrades

Regular maintenance and timely upgrades are essential for ensuring the long-term efficiency and reliability of your data centre. By implementing a proactive maintenance schedule, you can identify potential issues before they escalate into costly problems. Additionally, upgrading to the latest hardware and software technologies can provide significant improvements in performance, energy efficiency, and security.


Optimising your data centre’s efficiency is an ongoing process that requires a comprehensive approach and constant monitoring. By implementing these proven strategies, you can unlock the full potential of your data centre, reduce costs, and ensure a competitive edge in today’s digital landscape. As a CTO, investing in data centre optimisation not only contributes to the success of your organisation but also showcases your expertise in this critical area.

Definitions of the Internet of Things (IoT) often focus on connectivity and the fact that it involves adding isolated electronic devices to large-scale networks – hence the ‘internet’ in the name.

But whether we’re talking about ‘smart’ heating systems or kitchen appliances in our homes, networked traffic control systems or the advanced plant machinery found in a modern ‘smart factory’, the IoT is about more than simple connectivity.

Fundamentally, the IoT is about the introduction of computing capabilities to devices that traditionally fall outside the realm of IT. It is also about the creation, exchange and processing of data. Data is the fuel that drives the level of real-time intelligence and automated decision making that we now habitually refer to using the ‘smart’ tag.

In the world of business, in industry in particular, the rise of IoT has set the worlds of operational technology (OT) and IT on a collision course. The basic ‘Industry 4.0’ model of adding masses of sensors to operational devices and using the data they generate to optimise processes sounds deceptively simple.

In reality, generating all of that data from ‘smart’ production systems raises some practical problems.

Different paths

For one, the volumes of data we’re talking about are staggering and are only going to grow as the number of industrial IoT connections more than doubles over the next five years. The data generated by industrial machines comes in a raw, unprocessed form. Some of it is useful, a lot of it is not.

To sift out the data that can really make a difference, it all needs processing and sorting. IoT devices themselves are built to generate data, not process it. That’s where the amalgamation with IT comes in.

The problem here is that OT and IT have evolved in parallel along quite different trajectories. Plant machinery and computers are designed to use different types of data in different ways, so the first challenge is having a suitable means of translation that makes data mutually intelligible across the OT/IT divide.
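A minimal sketch of what such a translation layer might do, assuming a hypothetical fixed-width binary sensor frame and an invented scaling factor (real protocols like Modbus or OPC UA are more involved, but the shape of the job is the same):

```python
import json

def translate(frame: bytes) -> str:
    """Convert a raw OT sensor frame into JSON an IT application can consume.
    The frame layout and scaling below are invented for illustration:
    bytes 0-1 sensor id, bytes 2-3 temperature in ADC counts, bytes 4-5 RPM."""
    sensor_id = int.from_bytes(frame[0:2], "big")
    raw_temp = int.from_bytes(frame[2:4], "big")   # raw ADC counts
    raw_rpm = int.from_bytes(frame[4:6], "big")
    record = {
        "sensor_id": sensor_id,
        # Assumed scaling: 0.1 degC per count, offset -40 degC.
        "temperature_c": round(raw_temp * 0.1 - 40.0, 1),
        "spindle_rpm": raw_rpm,
        "unit_system": "SI",
    }
    return json.dumps(record)

frame = (42).to_bytes(2, "big") + (653).to_bytes(2, "big") + (1500).to_bytes(2, "big")
print(translate(frame))
```

The point is the direction of travel: opaque, machine-specific bytes on one side, self-describing structured data on the other.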

Second, there is the question of where processing and interpretation of OT-generated data takes place. The traditional location for large-scale data processing is the data centre.

But even with all the agility the cloud offers, the act of transiting massive volumes of data from the factory floor to the data centre, processing and analysing it, and then shipping it back to the frontline where machinery can use it to make operational decisions creates undesirable lag.

For industrial IoT to deliver on the efficiency gains and operational agility it promises, there needs to be a real-time information exchange that sees data generated by plant machinery turned around into actionable intelligence almost immediately. And that represents one of the fundamental challenges of effectively integrating the worlds of IT and OT.

Solutions at the Edge

There are various models and metaphors used to think around this issue of OT-IT interoperability. One of them is to picture the divide between all of the data-generating and action-taking ‘smart’ devices on one side and the various applications and platforms that interpret and manage that data on the other as the ‘edge’ between the OT and IT worlds.

Solutions that mediate across this divide – which convert raw data into a form that the IT applications can use, and which in turn feed back actionable intelligence to the OT systems – are therefore known as ‘edge computing’. An example is the EdgeX Foundry, an open source middleware framework that seeks to standardise the flow of data across the ‘edge’ between IoT devices and cloud and enterprise applications.

The edge where OT and IT meet is more than just a metaphor. It is also a physical location, the actual point in time and space where data passes from device to computer system. It is conventional to talk about the ‘edge’ as being at the network edge, i.e. at the limits of the reach of the modern IT network. But we can just as easily turn that around and say that the edge is where OT functionality stops and IT takes over.

With this latter definition of the edge, we can see immediately that, in industrial IoT settings, the edge is physically located on the factory floor, hard up against the massed banks of sensors. The concept of moving IT capabilities here brings us back to the need for fast, efficient, real-time data exchange.

Blurring the lines

Edge computing therefore represents a new kind of IT architecture, one designed with the needs of OT compatibility in mind. Instead of the conventional centralised data centre model, with all the benefits of scale that brings, edge computing marks a radical decentralising of compute and processing capabilities, pushing them back out to the vicinity of networked devices.

Edge computing can therefore be viewed as an IT add-on for modern operational systems. Rather than transporting huge volumes of data long distances to remote data centres for processing and interpretation, edge solutions can collect, filter, analyse and apply AI functions in real time as the data is generated. This serves to make the use of data in ‘smart’ systems more effective and efficient, as well as lowering data handling costs – an important consideration when data volumes are increasing so rapidly.
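A toy sketch of the filtering idea described above: anomalous readings are forwarded upstream immediately, while routine readings stay local and are only counted into an aggregate. The thresholds are illustrative:

```python
def filter_at_edge(readings, lo=10.0, hi=90.0):
    """Split a stream of sensor values into anomalies worth shipping to the
    data centre and routine values handled locally. Bounds are assumptions."""
    forward, kept_local = [], 0
    for value in readings:
        if value < lo or value > hi:
            forward.append(value)   # anomaly: ship upstream immediately
        else:
            kept_local += 1         # normal: fold into a local aggregate
    return forward, kept_local

stream = [42.0, 95.5, 50.1, 3.2, 60.0]
anomalies, normal_count = filter_at_edge(stream)
print(anomalies, normal_count)
```

Even this crude filter sends two values upstream instead of five; at industrial data volumes, that reduction is where the bandwidth and cost savings come from.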

Long term, it is likely that we will see the ‘edge’ creep outwards further still, as far as OT devices themselves. To date, the need for edge solutions to exist as an independent, intermediate layer between OT and IT stacks has been driven partly by the number of legacy systems that businesses want to include in their digital transformation plans, and partly by cost considerations. Sensors, actuators and other devices that generate data are simpler and cheaper to mass produce than more sophisticated pieces of hardware that can also process data.

But we’re already seeing a shift towards OT devices being manufactured with in-built IT capabilities, taking over at least part of the data processing and analysis themselves. At that point, OT/IT convergence will be complete and there will eventually cease to be a meaningful distinction between the two. As a result, edge computing will become a de facto reality in any IoT setup.

Global digitisation is on a collision course with the increasingly urgent need to tackle climate change.

For all the sound arguments that technology will be key to a greener future, there is at present one big problem – digital technology is a huge consumer of energy, and while we continue to rely on burning fossil fuels as our primary power source, increased digitisation runs counter to all efforts to reduce carbon emissions and protect the environment.

Nowhere is this more starkly illustrated than in data centres. Worldwide, data centres now form the backbone of our sophisticated IT networks and infrastructure. Offering computing resources on an industrial scale, the massed ranks of servers and processors that make up a typical data centre, plus all the cooling systems required to prevent them from overheating, consume vast quantities of energy.

It is estimated that data centres already use upwards of 400 terawatt-hours of electricity annually, which is not far off 3% of total global power production. With demand for data centre capacity growing apace, accelerated in part by the COVID-19 pandemic and the new impetus it has given to cloud migration, it is believed that data centres could soon be swallowing up 8% of the world’s electricity capacity.

The data centre industry finds itself in a difficult predicament. The problem, of course, is not that power consumption per se is bad for the environment. It’s the fact that the world is taking so much time to wean itself off fossil fuels and replace them with greener, renewable sources of energy that is the real issue.

Data centre operators might be tempted therefore to shrug and say, it’s not our problem – people want their computers and smartphones, their internet and their cloud-based systems. It all has to be powered somehow. But the environment has become a sensitive subject when it comes to brand reputation. For the hyperscale data centre operators like Amazon, Google, Facebook and Microsoft who also have enormous consumer-facing interests, being seen to do nothing to address the environmental impact of their operations is not an option.

But nor is slowing down the demands of digital progress.

Targeting zero carbon

There is no shortage of initiatives aimed at powering data centres in greener, more sustainable ways. These range from locating data centres in cold regions (or even in cold water) to make use of non-mechanical free cooling technology, to the major carbon offsetting initiatives run by Google and Microsoft, which see them match the power they actually use with purchases of electricity from renewable sources around the globe, bringing vast quantities of clean electricity onto the grid every year.

But as Google itself acknowledges, this is not the same as making sure that all the energy it uses itself is from renewable sources – a target it has set itself to achieve by 2030. When it comes to carbon emissions, net zero is not as good as zero. But how can data centre operators realistically achieve this, especially in regions of the world where access to renewable power sources remains problematic?

One technology that has been gaining a lot of attention recently is hydrogen fuel cells. In 2020, Microsoft ran a successful trial in which it powered a server bank for 48 hours entirely from hydrogen cells, and has now outlined a plan to use them to replace diesel generator back-up power systems at its data centres.

It’s certainly a step in the right direction – diesel generators are notoriously bad polluters, churning out carbon emissions along with toxic fumes and particulate matter that endanger health. Hydrogen fuel cells, on the other hand, generate electricity with water and heat as the only by-products.

Looking forward, there is a lot of excitement about the potential applications of hydrogen fuel cell technology, ranging from electric cars to powering individual buildings with their own cell banks. At present, however, the limited amount of power individual fuel cells can produce, and the consequent need to stack cells in large arrays, put even these ambitions some way off, never mind the massive amounts of power needed to run a data centre.

However, with Microsoft aiming to go one better than Google and be carbon negative by 2030 – i.e. taking more CO2 out of the atmosphere than it emits – there is a big drive within the company to take the lead on finding alternative green power sources for data centres. With research funded through co-founder Bill Gates’ multi-million dollar environmental interests, if anyone is going to upgrade hydrogen fuel cell technology to the point where it can run a data-centre-scale UPS, don’t bet against it being Microsoft.

The COVID-19 pandemic will go down in history as one of the greatest moments of disruption to human society. But from a business perspective at least, coping with disruption and uncertainty is nothing new.

Yes, the pandemic has shaken things up on an unprecedented scale. But in a report commissioned by PwC into the economic impact of global crises back in 2019, seven out of 10 companies surveyed said they had experienced at least one major disruptive event in the past five years, be that a cyber attack, a natural disaster, or the fallout from social unrest of some sort.

What is more, 95% of firms said they expected to experience major disturbances to their operations in the future – and that was before anyone had even heard of COVID-19.

As a result, resilience has become a big topic in operational planning and management. How do you keep supply chains functioning if transport links are suddenly cut or one particular link in the chain is broken, such as happens when a supplier goes out of business? How do you keep revenue streams flowing in volatile markets that are liable to change with little warning and at pace?

And something that the COVID pandemic has certainly brought into sharper focus – how do you maintain a robust digital infrastructure that can cope with trends like remote working, increased cloud adoption and omnichannel commerce?

The need for reliable digital infrastructure has brought the issue of network resilience to renewed prominence. Network technology underpins many of the things seen as critical to ensuring business resilience going forward, such as dispersed teams and agile operational structures. Network resilience therefore in itself becomes a critical business asset.

Advanced network resilience

Cisco recognises this by dedicating its 2021 Global Networking Trends Report entirely to the topic of network resilience. To quote from the report: “As the sole platform that binds, protects, and enables an increasingly dynamic and distributed set of users and devices and increasingly disaggregated and dispersed applications and workloads, the network plays a central role in helping organizations build their resilience.”

Because of this increasing importance, Cisco argues that straightforward continuity and uptime are no longer good enough as measures of a strong, reliable network. It offers five trends which it sees as leading to ‘advanced network resilience’, covering remote security, automated management and recovery, and AI-powered analytics for real-time intelligence.

Another trend identified by Cisco as helping to take network resilience to the next level is multicloud. The incorporation of multiple cloud services into a single IT architecture is commonly associated with businesses seeking greater choice and agility from their cloud-based assets, picking and choosing the best services for individual functions rather than running everything through a single provider.

But another benefit of a multicloud strategy is that it allows businesses to distribute digital workloads and data storage across multiple providers. This boosts resilience in the same way that data centres will distribute workloads across multiple physical servers, server clusters or even geographical locations – if one node goes down, there are still plenty of others functioning to avoid or at least minimise downtime.
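The failover logic behind that resilience can be sketched in a few lines. The provider names and submit functions here are stand-ins for real cloud SDK calls, invented for illustration:

```python
class ProviderDown(Exception):
    """Raised when a cloud provider cannot accept the workload."""
    pass

def dispatch(workload, providers):
    """Try providers in priority order until one accepts the job.
    `providers` is a list of (name, submit_fn) pairs; returns the
    name of the provider that ran the workload."""
    errors = {}
    for name, submit in providers:
        try:
            submit(workload)
            return name
        except ProviderDown as exc:
            errors[name] = str(exc)   # record and fall through to the next
    raise RuntimeError(f"all providers failed: {errors}")

# Stand-in submit functions simulating one outage and one healthy provider.
def down(_):
    raise ProviderDown("region outage")

def up(_):
    return "ok"

print(dispatch({"job": "nightly-report"}, [("cloud-a", down), ("cloud-b", up)]))
```

Real multicloud platforms add health checks, data replication and policy on top, but the principle is this simple priority-ordered fallback.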

Multicloud networking

Adopting a multicloud strategy adds a level of complexity, of course. Rather than running an entire IT environment within a single cloud, a multicloud architecture creates a web of different services that have to be linked somehow. How exactly do you connect these different nodes and manage things like workload, access and security consistently across them all in line with operational requirements and business priorities?

This is where Cisco argues the concept of cloud networking comes into its own – an emerging discipline that focuses specifically on how to connect different cloud services into single heterogeneous architectures.

Cloud networking is an interdisciplinary approach that combines expertise across network, infrastructure and cloud, as well as application development and security. It borrows from development approaches like DevOps and microservices to create control plane solutions that can deploy, manage and scale workloads, applications and infrastructure across multiple environments as required, automating a critical part of delivering enhanced resilience and continuity.

It is also an approach that makes use of cutting-edge networking technologies like SD-WAN and SASE to deliver optimised access, authentication and policy automation across environments, ensuring consistency in performance and data security whether workloads are being run via SaaS, PaaS or IaaS, in a public or private cloud, or on premises.

In summary, a multicloud approach in many ways mimics in IT delivery what enterprises are doing to create more resilient supply chains in their physical operations – diversifying suppliers, mitigating risk by spreading the load. Reliance on a single cloud provider turns into a network of service relationships, so if there is an issue with one, another can step up to the plate and keep the wheels turning.

It’s a more complex as well as a more dynamic way of managing digital infrastructure. But with disciplines like cloud networking evolving, multicloud architectures can play an important role in making businesses more resilient in the face of future uncertainty.

Ever since Mark Zuckerberg announced that Facebook, Inc. was being rebranded as Meta Platforms, Inc., the phrase ‘metaverse’ has been everywhere. The fact that one of the world’s most prominent tech tycoons has so publicly thrown his considerable financial weight behind an idea that quite frankly sounds like science fiction has certainly made the world sit up and take notice.

What exactly is the metaverse? It’s the vision of a shared, global virtual reality, a new, embodied iteration of the internet where we won’t just be connected via web pages and messaging and audio and video streaming. We’ll actually be able to walk, talk and interact in digital 3D spaces.

A good explanation is offered in this article. It describes the current state of VR as a multiverse – lots of isolated, separate experiences through computer games like Fortnite and the emerging generation of VR meeting and collaboration spaces like Spatial and Meta’s own Horizon Rooms.

The metaverse is all about breaking down the barriers between all these emerging VR platforms. Connecting all the islands to create one big, fully integrated alternate reality.

Why all the excitement about the metaverse?

The big vision is that the metaverse will create a brand new digital economy that will transcend what’s been achieved in the past 30 years with the first and second generations of the internet. Providing heightened, more tangible, more immersive experiences will be key.

Get frustrated shopping on your mobile, having to fiddle about with small images? In the metaverse, with your VR headset on, you (or your avatar) will be able to walk into a virtual shop and try things out. Feel there’s something missing when chatting to friends via live text chat or video? In the future, you will be able to meet up with them in the virtual universe any time you like, wherever they happen to be in the real world.

With remote working now a well-established trend, there is a lot of excitement about how the metaverse could take remote collaboration to new levels. Never mind Slack, Teams, Zoom and all the rest. How about stepping into a virtual office where you can discuss, share, plan and interact exactly as you would in a real office?

If all of this sounds fanciful, then the caveat has to be made that there are a fair few technological barriers to break through before the vision of a global VR internet becomes a reality. But given the pace of technological progress, you would perhaps not be wise to bet against viable solutions emerging in the next decade. Especially as many of the main challenges have already been identified.

Next-generation connectivity

Take connectivity, for example. As things stand, network speeds and latency are proving problematic for the further evolution of VR applications. Immersive 3D video games, for example, are huge pieces of software that require considerable computing power. When they are delivered via the cloud, transport speeds currently limit how big, how graphically detailed and how realistic you can make them.

When you are talking about scaling that up to a global network of realistic digital 3D spaces, that’s obviously a major challenge. The grand vision comes tumbling down if the alternate reality is glitchy and as slow as an old dial-up connection. 

So park your plans to start building connected VR apps until network technology catches up, then? You might not have to wait as long as you think.

We’re only just starting to see the full rollout of 5G, with the promise that it will usher in a new era of full wireless connectivity yet to be realised. Even so, attention is already turning to 6G and the next steps forward that would represent.

And what is at the heart of current conceptual thinking around 6G? That’s right, ultra-fast, minimal-latency hyperconnectivity at a global scale. The kind where concerns over the number of devices or the size of applications just disappear, because capacity is a couple of orders of magnitude above anything we’ve yet conceived.

To give an example, it’s thought that 6G technology could achieve connection speeds of up to one terabit per second – 1000 times faster than the current gigabit gold standard. 

That’s the kind of connectivity on which something as massive as the metaverse could be built. And if you believe the current forecasts, we’ll all be living in a 6G world by the middle of the next decade.

If you were looking for some kind of popular culture metaphor to describe the trials and tribulations of cybersecurity, you could do worse than turn to the Star Wars franchise.

While digital technology is undoubtedly a Force for good in our modern world, there is equally no doubt that it has its Dark Side, too. The more digital tech evolves, the more powerful it gets, the more it also fuels the ambitions of the cybercriminals and hostile actors who wish to use it for ill.

Locked in a seemingly eternal struggle for supremacy between good and evil, the further down the digital road we go, the more the threat from cybercrime grows. And so we look to cybersecurity as our only hope.

As the beating heart of the cloud-based architectures that now dominate IT, data centres find themselves right on the front line of this galactic battle. Data centre infrastructure and technology continues to evolve at lightspeed.

But as it does so, the protections offered by existing cybersecurity protocols get stretched. Gaps open up that rogue digital actors seem able to pounce upon at will. The result is that data centre security is in a constant state of flux, continuously trying to anticipate how the next leap forward might leave it vulnerable before the hackers do.

Why the threats keep growing

There are a number of reasons why rapid progress in data centre technology creates gaps in cybersecurity defences like this. One is that the landscape for potential attacks just gets bigger and bigger.

We’re connecting more and more devices all the time. We’re pushing more and more compute capabilities out of the data centre to the network edge to achieve better processing speeds, lower latency and bandwidth relief. But to use another analogy, that’s like taking the garrison out of a big secure castle, splitting it up and scattering it to dozens of smaller forts way out in the hinterland. The potential targets for attack multiply many times over.

We’re also operating in a much more complex, communal digital ecosystem these days. Everything is open source, SaaS, outsourced. The average company now uses an astonishing 80 different SaaS applications. That’s a lot of data and functionality being trusted to third parties.

When you rely on so many partnerships, you simply cannot throw a company firewall around your IT infrastructure the way you could when everything was run on premise or via an all-encompassing private cloud. You put your trust in a lot of different people. And inevitably someone, somewhere leaves a tiny little chink that can end up being exploited.

The race for zero trust

That is why one of the biggest trends in cybersecurity is so-called zero trust – a model that accepts that the traditional network edge, like the safe mediaeval castle from times of yore, is a thing of the past.

Designed for distributed IT architectures, remote working and complex collaborative ecosystems, zero trust makes authentication and authorisation a first principle of system access. You cannot get past the encryption controls without proving who you are and may be challenged for revalidation at any time. It’s the digital equivalent of being asked to prove your identity every time you step onto the street (and every street thereafter).
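One common building block of this model is the short-lived, signed access token, which forces exactly that kind of repeated revalidation. A minimal sketch using Python’s standard library (the hard-coded secret is a simplification for illustration; real deployments keep keys in a key management service and typically use standard token formats):

```python
import hmac
import hashlib
import time

SECRET = b"demo-only-secret"   # illustration only; never hard-code in production

def issue_token(user: str, ttl_s: int = 300) -> str:
    """Issue a short-lived token. Expiry forces the periodic revalidation
    that the zero-trust model calls for."""
    expiry = str(int(time.time()) + ttl_s)
    sig = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{expiry}:{sig}"

def verify(token: str) -> bool:
    """Accept only tokens that are untampered and not yet expired."""
    user, expiry, sig = token.split(":")
    expected = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

token = issue_token("alice")
print(verify(token))                        # fresh and untampered
print(verify(token.replace("alice", "eve")))  # identity changed: signature fails
```

Checking every request against a token like this, rather than trusting anything inside a network perimeter, is the practical shape zero trust takes.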

But in data centres, imposing zero trust security systems on legacy technology stacks that were not designed for them is a highly challenging task. To put it in simple terms, it’s complex and it costs a lot of money. So there is a big lag in data centres catching up with what is emerging as the gold standard for distributed authentication control. And that leaves data centres vulnerable.

Tech-based solutions will emerge and are already emerging. Nvidia, for example, is putting programmable security at the heart of its new generation of ‘data centre infrastructure-on-a-chip’ DPU solutions. Nvidia’s latest DPUs support VMware’s Project Monterey, a re-engineering of its flagship hybrid cloud platform designed to deliver zero trust security.

Changing the culture

There are also changes data centre operators can make at the level of operational procedures and culture to better bolster themselves against the expanding threat landscape.

Security strategies need to be holistic and end-to-end, based on a complete view of the broader ecosystem. So much of IT delivery is based on collaboration and complex supply chains these days that you cannot focus on any one point or node in isolation, the data centre included. Cybersecurity needs to be a constant topic of conversation between partners, to ensure their approaches are aligned. Companies need to start demanding clear security performance metrics to get the guarantees they need.

As always, people matter just as much as technology. Countless threats stem from human error at all levels of organisations. Cybersecurity needs to be treated as a core skill requirement, with appropriate training given to all personnel and recruitment that prioritises cybersecurity expertise.

The clouds we operate in now and in the future are reaching galactic scales. And so the battle to secure them is reaching galactic proportions, too. Only by focusing resources collectively will the dark forces of cybercrime be kept at bay.

We’re living through yet another boom in cloud computing. Depending on how you want to categorise these things, it might be the third or fourth such period of rapid expansion in cloud adoption.

When we look at the figures, this current cycle even puts previous cloud booms in the shade. According to one recent analysis, the global cloud computing market is set to grow at an incredible CAGR of 30.10% up until 2027. That’s an increase in cloud spending of nearly a third a year, over a five-year period.

If that forecast is right, by 2027 the global cloud industry will be worth a mammoth $750bn.
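The arithmetic behind that forecast is straightforward compounding: roughly 30% a year over five years multiplies the starting figure by about 3.7. A quick sketch, assuming a starting market size of around $200bn (an illustrative figure, not one taken from the forecast):

```python
def compound(start: float, cagr: float, years: int) -> float:
    """Future value after compounding `start` at `cagr` per year."""
    return start * (1 + cagr) ** years

start_bn = 200.0                 # assumed 2022 market size, for illustration
final = compound(start_bn, 0.301, 5)
print(f"${final:.0f}bn")         # lands in the mid-$700bns, near the forecast
```

A starting point of about $200bn growing at 30.10% a year is consistent with a market worth roughly $750bn by 2027.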

Post-pandemic peak

So what is driving this new era of huge cloud investment and expansion? Well to start with, we can’t ignore the impact of the P-word. Funnily enough, spending on cloud actually took a dip in the second quarter of 2021 as most countries just started to open up from the deepest lockdowns of the whole pandemic.

But then it roared back in the final quarter of last year to end a massive 13.5% up year on year. The driver was the huge rush for organisations to invest in digital transformation projects. Over the course of the previous 18 months, businesses had had to adjust rapidly to remote working and running digitised sales channels and other operations at scale.

While COVID restrictions were in full swing, much of this had to be implemented via a make-do-and-mend approach. But since restrictions started to ease in summer 2021, organisations have had the freedom to run with their digital transformation projects. Having seen how valuable the agility of the cloud could be in ensuring business continuity through a crisis, it’s understandable that businesses should rush to be ready next time.

Plus you could also say that the pandemic was something of a coming of age for cloud technology. Any lingering doubts that it was resilient enough, secure enough, reliable enough or had the capacity to support the global burden of trade, commerce and communication were laid to rest. At the same time, people realised remote working really could work, and online spending soared as more and more people shopped from home.

But isn’t what we’re seeing a relatively short term spike in cloud spending as organisations look to pivot to the digital first ‘new normal’ that has become the post-pandemic blueprint? Will it really last as long as five years?

A tipping point in IT spend

There are several things to unpick here. One is that, by the very nature of how the cloud works, for end users it isn’t a one-off capital investment. It’s an ongoing operational commitment, with delivery of cloud services defined by a subscription model (that famous ‘as-a-service’ phrase).

What we’re seeing is not just a sudden rush to invest in the cloud, but a fundamental reorientation of IT spend full stop. According to IDC, 2022 will mark the first year that spending on cloud IT infrastructure goes above non-cloud infrastructure. We’ve reached a tipping point where the cloud is now the dominant IT model. And that will of course drive growth.

At the same time, the way organisations use cloud computing is becoming ever more complex and sophisticated. It’s arguably no longer accurate to talk about ‘the cloud’ at all, because that suggests a single IT structure, possibly through a single vendor. Multi-cloud is far and away the dominant model now – some 89% of organisations use multiple cloud services from multiple vendors, and 80% use a hybrid blend of public and private clouds.

According to IDC, both shared/public and dedicated/private cloud infrastructure will experience double-digit spending growth up to 2026. That’s a function of organisations breaking their IT requirements down to a granular level and choosing best-in-class providers to meet very specific needs.

The more cloud providers a business partners with, the more their cloud spend increases. Instead of networks of maybe a handful of cloud services, we’re now talking entire ecosystems comprising dozens. But that is against the background of spending on non-cloud IT going into decline.

Rapid, high-level growth in cloud doesn’t necessarily mean organisations are spending more on IT overall (although, as digital increasingly becomes a core business function, that is likely to happen, too). But what it does show is just how far business IT is moving into the cloud full stop.

There’s a widespread consensus that we are in the middle of a new phase of evolution in digital technology – a phase driven by Artificial Intelligence (AI).

75% of businesses expect AI to transform their organisation within three years; 61% believe it will completely transform their industry in the same time period. According to PwC, 52% of companies have accelerated their AI adoption plans in response to the COVID-19 pandemic.

The perceived benefits of AI are manifold and apply across the full spectrum of industries and business functions. AI leads to smarter decision making and better use of data. It bridges the divide between data-led intelligence and action, opening the door to sophisticated, ‘active’ automation. This can be used both to take on the burden of repetitive, labour-intensive but low-value tasks from people, and to perform complex tasks with a higher degree of accuracy and reliability than human beings are capable of.

All in all, it is estimated that AI could add $13 trillion to the global economy by 2030, or a 1.2% annual boost to GDP.

So where’s all the AI?

Yet the road to an AI-enabled future is not entirely without its bumps and obstacles. We can get a sense of this from the fact that, while businesses are betting on AI for the future, current use remains surprisingly low. Nine out of 10 of the world’s largest organisations have investments in AI, but just 15% actively use it at present.

So what’s holding people back? One issue that stands out is that AI development remains expensive and continues to have a high failure rate.

According to figures from Gartner, only around half of AI projects ever make it from prototype to production, while an eye-watering 85% of projects fail to deliver on their business objectives.

This is not exactly unusual in tech investment. From the dot-com bubble to the first cloud boom, we regularly see much-hyped technologies attract a frenzy of investment in projects that ultimately go nowhere. Before the market in a new technology matures, there is a lot of jumping on the bandwagon rather than thinking strategically about how it can best be used.

But the high failure rate in AI projects does suggest another, more fundamental issue: AI development is complex and technically challenging. Plug-and-play AI tools that integrate seamlessly with your existing IT assets and deliver the benefits of AI out of the box do exist. But they tend to sit at the more ‘lightweight’ end of the AI spectrum – chatbots, sales and marketing automation, and the like.

Once you get into the realm of deep neural networks, self-driving vehicles, autonomous medical robotics and the like, AI becomes a whole different beast. The more complicated the task, and the more precise the AI system needs to be (to drive a car safely or perform complex surgery, say, rather than just target an advert at someone), the more demanding the job of building the required algorithms.

Not only that, but the most powerful AI platforms simply cannot exist as off-the-shelf pieces of code. They need an entire infrastructure to support them – a cloud back end with the right amount of processing and network capacity, and APIs to allow them to work within your broader IT infrastructure. Many of these things have to be custom built on a project-by-project basis.

Can AI solve its own development challenges?

All of this adds cost to the AI development process. And as well as sheer complexity, there are other factors that contribute to the high failure rate, such as a shortage of developers with specialist AI programming skills and a patchy approach to models and standardisation.

If you use the analogy of building a car, many AI projects can feel like trying to build the entire thing from scratch from raw materials. What makes production more cost effective, efficient and allows you to scale is having a set of ready-made components available that you simply have to bolt together.

As is so often the case with technology, AI may well end up being the answer to the development challenges it creates. We’re starting to see Machine Learning tools applied to some of the more demanding aspects of AI development.

For example, Galileo is a platform that monitors the development process to highlight potential issues in how the AI application will actually work. It focuses in particular on data modelling, aiming to make the complex and laborious task of data ‘training’ more streamlined and efficient.

It’s not quite AI creating AI, or some dystopian future vision of machines self-replicating. But it does highlight how, if more businesses are to benefit from AI in the near future, AI’s powers perhaps need to be turned inwards to address the issue of cost and complexity in development.

with Darren Szukalski

For the first episode of The WNTD podcast, we were joined by Darren Szukalski, Sales Director at 1823 Group, a company that ‘manages communications for the real world of work’. They are a small but fully formed communications provider, offering all three main mobile networks, network connectivity, and unified communications to small and medium enterprises.

Having developed a varied career working with leading corporations such as Apple, AT&T, and Virgin Media, he shares his perspective on why all companies need to rethink their focus on connectivity infrastructure.

Where do you see the telecommunications sector heading? 

It depends on how you look at the market. Most of the time, headlines feature topics like AI and cloud transformation, and security is an increasingly hot topic. But a small company will have a different view of technology than a big company; they’re impacted by these developments in a different way, and they’re viewing the tech world through a different lens.

A small organisation of ten people, for example, might be more interested in advances in how they can best collaborate through telephony and how they’re using their software and applications. A big organisation, on the other hand, will be talking about AI and ESG, because that’s what’s relevant to them.

In fact, ESG is a buzzword at the moment; it stands for Environmental, Social, and Governance. Some of the biggest companies are really focused on becoming more environmentally friendly – in their business processes, their infrastructure, how they travel, how they can be more carbon efficient. And the emphasis is really on the governance supporting the business activities they’re undertaking. Whereas that’s, understandably, not a priority to the same extent in smaller businesses – they have a different set of challenges.

What would you say to organisations thinking about ways to improve their connectivity infrastructure? 

We’re seeing a change in people’s habits. Direct outreach is becoming increasingly difficult because people are harder to get hold of. A few years ago, office hours used to be our peak; salespeople worked to catch people when they knew they’d be in the office or travelling at certain times. That’s all been thrown up in the air in recent times, with people not going to the office as much, or doing so on flexible schedules.

So, it’s become especially important to present your brand and services well online through blogs, good reviews, referrals, and recommendations. Video is becoming increasingly prevalent now that people are consuming content on their mobile devices.

Ultimately, technology exists to create efficiencies in the ways people work and collaborate. The Internet of Things, for example – another buzzword – involves using mobile technology to improve what might otherwise have been a manual process. Whether in manufacturing or travel, technology should be used to make things more efficient. Every company ought to ask: “How can technology make my life easier? How can we sell more? How can we improve the customer experience?”

On that last point, when we talk about the role of technology within customer experience, it has to be coupled with an understanding that people still buy from people. It doesn’t matter what we buy or what we consume – it feels good when someone goes that extra bit further to ensure we’re happy with our purchase.

You can listen to the full episode here.

The case for prioritising efficiency in data centre workloads just gets stronger and stronger all the time. 

There has been widespread recognition for some time now that the massive energy consumption needed to keep the world’s data centres running is a major contributor to carbon emissions. As digitisation accelerates, global data production and use are expected to double between now and 2025.

All of that data – the sum of more internet users, more connected devices, more cloud-based systems, more compute-heavy technologies like AI, AR/VR, the metaverse and so on – has to be processed and stored somewhere. That means demand for data centre resources is also growing exponentially. And one of those resources is electricity.
From an environmental perspective, as the bulk of the world’s electricity still comes from burning fossil fuels, such massive growth in demand for data centres inevitably means higher carbon emissions. But as energy prices soar around the world, greater electricity consumption also means higher costs. Data centre operators are in a race to keep up with demand. But they also need to keep one eye on those spiralling costs.

It’s thought that data centres currently account for 1% of global energy consumption. This has actually stayed more or less stable since 2010, despite huge growth in data centre use. That’s because data centre operators have worked tirelessly to improve energy efficiency throughout that period. Their efforts have in effect kept pace with growth in demand so net energy consumption has been unaffected.

But there are fears that the traditional methods used for making data centres more energy efficient – hyperscaling, building and infrastructure design, alternative cooling methods – are no longer enough. One EU study has concluded that data centres will eat up 3.7% of Europe’s electricity supply by 2030.

So what are the options left for further improving data centre efficiency at a scale that will keep up with such enormous increases in data centre usage? It’s here that all eyes turn to server hardware.

CPUs, storage… and both

The world of computer science has long been battling the recognition that there are physical limits to how fast and efficient you can make computer chips (or, more precisely, central processing units, CPUs). The way you make a CPU more powerful is to pack it with more transistors. But that raises the issue of size and space, so the focus is on making transistors smaller and smaller.

Transistors just a nanometre (1nm) in width – one billionth of a metre – have already been created in research labs, while the smallest microprocessors in mainstream production use transistors around 40nm in width. But the smaller you go, the more difficult and expensive production becomes. Eventually, current technology hits a wall where it can’t go any further – which also means you can’t make chips any more efficient.

Chips are not the only focus for improving the overall efficiency of a server, however. Typically, less than 10% of the data held in a data centre is ‘active’ at any given time, meaning it is being used by CPUs. The rest is held in storage.

It has long been known that solid-state drives (SSDs) are much more efficient than conventional hard disk drives (HDDs), using 70% less power for the same capacity. Traditionally, data centre operators have been reluctant to transition over fully to SSDs because HDD storage is much cheaper. But with energy prices rising so sharply, reducing power consumption by using SSDs increasingly makes economic as well as environmental sense.
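To see why rising energy prices shift that calculus, a rough back-of-envelope comparison helps. The 70% power saving is the figure quoted above; the per-drive wattage, fleet size, and electricity price below are illustrative assumptions, not sourced numbers:

```python
# Rough annual electricity cost for a storage fleet, HDD vs SSD.
# All figures below are assumptions for illustration only.
HDD_WATTS = 8.0                 # assumed draw per HDD
SSD_WATTS = HDD_WATTS * 0.30    # "70% less power for the same capacity"
DRIVES = 10_000                 # hypothetical fleet size
PRICE_PER_KWH = 0.30            # assumed electricity price, per kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts_per_drive: float) -> float:
    """Yearly electricity cost for the whole fleet at a given per-drive draw."""
    kwh = watts_per_drive * DRIVES * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

hdd_cost = annual_cost(HDD_WATTS)
ssd_cost = annual_cost(SSD_WATTS)
print(f"HDD fleet: {hdd_cost:,.0f}/yr, SSD fleet: {ssd_cost:,.0f}/yr")
print(f"Saving: {hdd_cost - ssd_cost:,.0f}/yr")
```

Because the saving scales linearly with the electricity price, every increase in energy costs makes the higher upfront price of SSDs easier to justify.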

Utilisation is another area where clear efficiency gains can be made. It’s thought that between 5% and 10% of all server resources in a typical data centre remain active but unused, consuming energy without contributing anything to performance. The answer here is software orchestration: identifying underutilised resources and automatically switching off their power, and optimising resource allocation across all available servers.
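A minimal sketch of that orchestration idea: periodically inspect utilisation metrics and flag servers sitting below a threshold as candidates for power-down or workload consolidation. The threshold, server names, and utilisation figures here are all illustrative, not taken from any particular orchestration product:

```python
# Toy orchestration pass: flag servers whose average CPU utilisation
# falls below a threshold as "active but unused" power-down candidates.
# All names and numbers are illustrative.
LOW_UTILISATION = 0.05   # below 5% average utilisation

fleet = {
    "web-01": 0.62,
    "web-02": 0.01,    # powered on, doing almost nothing
    "db-01": 0.48,
    "batch-07": 0.03,
}

def powerdown_candidates(utilisation: dict[str, float]) -> list[str]:
    """Return server names whose utilisation is below the threshold."""
    return sorted(name for name, u in utilisation.items() if u < LOW_UTILISATION)

print(powerdown_candidates(fleet))  # ['batch-07', 'web-02']
```

A real orchestration layer would act on richer signals (memory, I/O, time-of-day patterns) and migrate workloads before cutting power, but the principle is the same: find the 5–10% of resources drawing energy for nothing.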

Finally, a more future-facing solution to improving data centre hardware efficiency is to rethink our conventional models of processing and data handling from scratch with a view to making them leaner and less power hungry. An example of this in action is a project at University College London (UCL) to create hardware that combines processing and storage in a single unit – an approach inspired by the way our brains work.

The idea is that this will cut down on the need to move data between processing and storage units, which consumes a lot of energy. The research team behind the project believe servers built on this kind of chip model could be up to 100,000 times more energy efficient than current chips.