AI is powering the future of energy — handle with care
- Thomas Helmer

CS&A's resident expert in high-risk industries, Thomas Helmer, says that while AI is revolutionising the energy sector, its adoption should be approached with caution, strategic planning, and rigorous oversight. He explains why here.

It was impossible to escape AI at the Abu Dhabi International Petroleum Exhibition & Conference (ADIPEC) last November. Admittedly, AI is ubiquitous these days, but what was notable was how front and centre it was at the world’s largest energy event. As in other industries, AI is revolutionising the energy sector, transforming energy systems by boosting efficiency, facilitating the integration of renewables, and optimising electricity networks. At the same time, AI is driving growing electricity demand from data centres, raising questions about its impact on existing infrastructure and sustainability.
The oil and gas industry invested an estimated US$4 billion in AI in 2025, a figure expected to reach nearly US$15 billion by 2035. Change is happening fast, with AI making significant strides over the last two years. As AI continues to transform our lives and the world of business, the pace of change is so rapid that corporate leaders are increasingly concerned that AI innovations are growing largely unchecked. In its survey of 1,300 leaders across government, business, and other organisations, the World Economic Forum’s Global Risks Report 2026 identified adverse AI outcomes as the risk with the largest rise in ranking over time.
Energy is sensitive to extreme geopolitical, environmental, and economic shocks. For a sector already crisis-prone, AI is another item to add to the corporate risk register. It will demand a response plan, a crisis checklist, and a crisis communications plan, all of which provide a roadmap for when things turn chaotic. While AI is transforming the sector and enhancing energy resilience, its adoption should be approached with caution, strategic planning, and rigorous oversight.
The increasing digitalisation of energy infrastructure, especially in the growing renewables sector, has made it ever more vulnerable to cybersecurity threats. AI-enabled tools create a strong cyber-physical dependency, which carries considerable risks of its own. Legacy IT infrastructure, automation, cloud computing, and reliance on third-party vendors whose systems may not be secure make energy operators even more susceptible. What happens when command-and-control links, cloud services, or AI systems fail or are compromised? You may lose critical capabilities when you need them most.
On the upside, AI tools are a powerful resource and can enable real-time threat detection, automate responses to incidents and enhance phishing defences. Conversely, bad actors can use AI to automate attacks and evade detection. As these threats evolve, it will be critical to adopt more proactive AI-enabled cybersecurity systems that respond quickly to ensure the resilience of the energy sector.
Agentic AI is the latest wave of artificial intelligence, taking systems to a new level of sophistication. An AI agent does more than generate text or code: it can plan, reason, and act with impressive autonomy, thanks to advances in machine learning and increased computational power. Agents are also dynamic, analysing data from past interactions to improve and adapt, enhancing productivity and reducing human error. They are poised to redefine the energy industry by supporting sustainable practices, efficient energy distribution, and the integration of renewable sources.

Yet while agents have the potential to revolutionise the energy sector through autonomous decision-making, they also pose a critical governance challenge. Traditional AI governance emphasises outputs; governance of AI agents must consider the entire decision-making process, from autonomous reasoning through action execution to accountability for outcomes. AI agents are a shift from traditional AI and require new frameworks, capabilities, and approaches to AI safety that extend beyond technical performance to encompass business risk management.
As an agent learns a senior engineer’s decision logic, that knowledge is encoded in shared software. With good governance in place, this can help boost resilience. But what happens without proper safeguards and oversight, if the agent is misspecified, the documentation is scant, or no one remembers how to run the process manually in a crisis? Misspecification can lead to ongoing errors: an agent that interprets data through an incorrect model, takes suboptimal actions, or acts on a flawed understanding and biased estimates will keep drawing incorrect inferences. And what of accountability when something goes wrong? Does it rest with the engineer whose judgment was encoded, or somewhere along the chain of agents, legacy software, and data assumptions?
Another issue is dependence on a specific AI vendor’s ecosystem, where an organisation’s workflows, data, and decision-making processes become deeply integrated with a single AI model or platform. This type of cognitive lock-in results from reliance on proprietary tools, fine-tuned models, and embedded memory systems, and it can be risky, challenging, and costly to unwind. Once an organisation standardises on one vendor’s AI agents and a single way of “seeing” things, alternative interpretations (for example, from regulators or NGOs) become easier to dismiss as “unscientific,” even when they highlight real risks.
AI has become core to business models and a strategic imperative in the energy sector, offering solutions to some of its most pressing challenges, such as optimising smart grids and advancing renewable energy integration. However, critical infrastructure is increasingly reliant on a nascent AI ecosystem that has yet to establish trust, reliability, and governance. AI is a powerful tool, and it requires a robust foundation of policies, controls, and oversight mechanisms to ensure autonomous systems operate safely and align with human intent. AI-enabled tools should be deployed in safety-critical environments, where high-risk decisions are made, only when these risks can be mitigated through a framework that supports reliable artificial intelligence. Only then can we safeguard against the risks AI poses.
Research support for this article was provided by Umaima Farhan and Anne Pappas.
Thomas Helmer is a Senior Director with CS&A International, a consultancy specialising in risk, crisis, and business continuity management. His expertise is in high-risk industries, including oil and gas, petrochemicals, aviation, shipping, and transport. He worked with Shell for many years in a range of roles that included assessing the effectiveness of risk management, business contingency planning and continuity, crisis management, and emergency response. Thomas has also worked for Nord Stream AG and Maersk Drilling, among other industry leaders.