February 3, 2025

Cybersecurity Tips for AI Buyers in Energy & Telco

Arun Misra is a Managing Director in Accenture’s Data & AI strategy practice, primarily serving clients in the power and energy industries. He is also an Adjunct Professor at Boston University, where he lectures in the business school on data and AI applications in industry.

Success for critical infrastructure providers (utilities, oil & gas, independent power producers, hyperscalers, fiber companies) has historically been measured by reliability, safety, and margin of error. The new (if overly discussed) era of increasing energy demand has appended “hurry up and deliver my electrons” to that list.

Terawatts of data center development, historic public infrastructure investment, and hungry private credit markets make the promises of Silicon Valley software companies offering “Generative AI agents” to “improve operational efficiency by a factor of 10” start to sound worthwhile. But before pushing all the chips in on the company AI strategy, ensure you have the right protocols in place. The questions to ask AI vendors are different from those of software procurement five years ago. Not to mention, the cybersecurity threat environment has evolved. Cyberattacks targeting US utilities spiked 70 percent in 2024 compared with the previous year. Digitized controls, switches, and sensors are rapidly proliferating across our energy system. And the geopolitical situation with peer adversaries grows only more threatening.

The operative question for AI buyers at energy, utility, and telecommunications companies has become:

How do critical energy and telecommunications infrastructure providers leverage the modern AI stack responsibly, amidst an increasingly complex and dangerous threat environment?

We believe the best way to determine the security and safety of AI tools for energy and infrastructure use cases is to evaluate solutions across three vectors:

  1. AI Platform Security: Does this solution introduce a new set of cybersecurity and confidentiality risks, above and beyond current software, that could imperil our projects?
  2. Model Training: Are vendors training machine learning models on my project specifications and location data?
  3. AI Output Accuracy: How can we validate that what’s coming out of these models and solutions is reliable and accurate?

And we brought in our good pal Arun Misra, Managing Director at Accenture, to give us his expert take on each of the three.

1. Does this [AI] solution introduce a new set of cybersecurity and confidentiality risks, above and beyond current software, that could imperil our projects?

With trustworthy, sophisticated vendors, AI solutions should not introduce any incremental cyber risk. So, what constitutes a “trustworthy, sophisticated” AI vendor? That is a topic for its own article, but let me give you two leading indicators, one for an AI application provider (e.g. Blumen) and one for model providers (e.g. OpenAI, Anthropic, dare I say…DeepSeek). 

Sophisticated AI application software vendors can institute both technical and legal controls to ensure the security of your critical site and operational data. Your AI application vendors need addendums to their data processing agreements with their AI model providers guaranteeing that your site or company data is never saved, stored, or used by the model provider. With those controls in place, there is no additional risk beyond that of your current non-AI software tools that store sensitive data. We recommend working directly with software vendors who will partner closely with you on data privacy and security and act as the layer between your project data and these model providers when building applications.
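To make that “layer between” concrete, here is a minimal sketch (in Python) of what the intermediary can look like: the application redacts sensitive site identifiers before any text reaches a model provider’s API. The regex patterns, function names, and `client.complete` call are illustrative assumptions, not any vendor’s actual implementation.

```python
import re

# Hypothetical patterns for sensitive site data; a real deployment would use
# a vetted entity-recognition pipeline, not two regexes.
COORD_PATTERN = re.compile(r"-?\d{1,3}\.\d{3,},\s*-?\d{1,3}\.\d{3,}")  # lat/lon pairs
SITE_ID_PATTERN = re.compile(r"\bSITE-\d{4,}\b")  # internal site identifiers

def redact_site_data(text: str) -> str:
    """Replace coordinates and site IDs with placeholders before the text
    leaves your security boundary."""
    text = COORD_PATTERN.sub("[COORDINATES REDACTED]", text)
    text = SITE_ID_PATTERN.sub("[SITE ID REDACTED]", text)
    return text

def ask_model(document: str, question: str, client) -> str:
    """The application layer sits between project data and the model provider:
    only redacted text is ever sent over the wire. `client` is a placeholder
    for any LLM client; under a zero-retention data processing addendum, the
    provider contractually does not store the prompt it receives."""
    safe_doc = redact_site_data(document)
    return client.complete(prompt=f"{safe_doc}\n\nQuestion: {question}")
```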

As for model providers, it is important to understand whether they incorporated “adversarial training” in the development of their model. Adversarial training exposes models to examples of cyber or malware attacks during their training phase, which enables the models to identify and respond to these attacks in real-world scenarios.
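For readers curious what this looks like mechanically, below is a minimal PyTorch sketch of the classic FGSM (Fast Gradient Sign Method) flavor of adversarial training, with a toy classifier standing in for something like a malicious-traffic detector. The architecture, data, and epsilon are placeholder assumptions; real model providers use far more elaborate regimes.

```python
import torch
import torch.nn as nn

def make_adversarial(model, x, y, loss_fn, epsilon=0.1):
    """FGSM: nudge each input in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy classifier standing in for, e.g., a malicious-traffic detector.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # stand-in batch

# One adversarial-training step: craft perturbed inputs, then train on them.
x_adv = make_adversarial(model, x, y, loss_fn)
optimizer.zero_grad()            # discard grads left over from crafting x_adv
loss_fn(model(x_adv), y).backward()
optimizer.step()
```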

If your model provider or application vendor is handling critical energy infrastructure information (CEII) related to operational infrastructure, it is important that they are compliant with NERC CIP.

Arun’s Recommendation: Choose reputable AI vendors with strong security practices, thoroughly review vendor privacy policies and data processing procedures, and establish clear security requirements in vendor contracts. For model providers, ensure they have incorporated adversarial training. 

2. Are vendors training on my project specifications and location data?

Do you train models with customer data? That should be one of the first questions you ask as an AI solution buyer. For companies that are developing their own AI models, whether a language model that summarizes documents or a next-best-action model that recommends the optimal routing of a transmission line, assume the answer is yes.

One place you can all but guarantee that models are being trained on the inputs your employees provide is consumer-facing AI tools – the ones your employees can pay for themselves with a credit card. It is much harder to govern the use of your site information and project documents if you are providing that information to a tool like ChatGPT or Claude. Those consumer-facing models can be powerful for generic problem solving in non-confidential enterprise functions, but they are much murkier in terms of their protection of your critical infrastructure site and project information.

A counterexample is what we do at Blumen. We use AI (Large Language Models alongside data science) as a “go-between” for geospatial data and regulatory texts to evaluate regulation applicability (Applicable or Non-Applicable). To do this, we are not feeding a big statistical model a lot of data points; we are telling the AI what to do based on a pre-determined set of parameters.
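As a simplified illustration of that parameter-driven approach (the field names, prompt, and `llm.complete` call below are stand-ins, not our production code):

```python
from dataclasses import dataclass

@dataclass
class SiteParameters:
    """Pre-determined, structured inputs derived from geospatial data."""
    county: str
    acreage: float
    in_floodplain: bool
    nearest_residence_ft: int

APPLICABILITY_PROMPT = """\
You are evaluating whether a regulation applies to a proposed energy project.
Use ONLY the site parameters and regulation text below. Answer with exactly
one word: "Applicable" or "Non-Applicable". If the parameters are
insufficient to decide, answer "Insufficient-Data".

Site parameters:
{params}

Regulation text:
{regulation}
"""

def evaluate_applicability(params: SiteParameters, regulation: str, llm) -> str:
    """The LLM acts as a go-between for structured geospatial data and
    unstructured regulatory text; it is instructed on this data, not
    trained on it. `llm` is a placeholder for any model client."""
    prompt = APPLICABILITY_PROMPT.format(params=params, regulation=regulation)
    answer = llm.complete(prompt).strip()
    # Constrain the output space: anything else is rejected and escalated.
    if answer not in {"Applicable", "Non-Applicable", "Insufficient-Data"}:
        raise ValueError(f"Unexpected model output: {answer!r}")
    return answer
```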

Arun’s Recommendation: Ask your vendors if they are using customer data to train their models. Invest in AI education to help your employees better understand AI technologies to facilitate safer, more secure adoption.

3. Are the outputs and deliverables from AI tools reliable and accurate?

We’ve all heard about “AI model hallucination,” the tendency of Large Language Models in particular to derive answers to questions from generalized patterns rather than factual information. Compounding this hallucination risk, most off-the-shelf AI tools (including many models that claim to be industry-specific to energy and infrastructure) do not process or interpret geographic information well.

This poses a risk to energy and infrastructure professionals who do work in the physical world with a high threshold for safety and reliability. Inaccurate model results are much costlier when used to site a natural gas compressor than when powering a chatbot that helps consumers buy engraved pajamas.

Protecting against inaccurate model outputs is one of the moving targets in our space of AI-native software. To mitigate this risk, it is important to understand how your AI application or model vendor incorporates safeguards to check for inaccurate AI model outputs.

At Blumen, we have engineered a system to interpret both structured geospatial data and unstructured regulatory texts (e.g. local zoning code). The following process is how we ensure our model outputs (permitting and engineering intelligence for energy projects) are at parity with a human deliverable, if not more accurate (a simplified sketch of the evaluation harness follows the list):

  1. Work with experts (we have former environmental planners and project engineers in-house) to understand the process by which they complete a task
  2. Aggregate past outputs of the task created by said expert 
  3. Develop code to mirror the logic and approach of the expert. Iteratively validate the code against past outputs. The more ground truth, the better.
  4. Hand the same expert outputs to a testing engineer, who measures the accuracy of the LLM-generated results against them.
  5. Once accuracy reaches the acceptable threshold across a set of evaluation criteria (succinctness, factual accuracy, correctly acknowledging data gaps), anywhere from 85 to 100 percent depending on the use case, start to implement in practice.
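A minimal Python sketch of the kind of evaluation harness steps 3 through 5 imply. The criteria names and the 85-100 percent threshold come from the list above; the function signatures are illustrative assumptions:

```python
# Illustrative evaluation harness: compare LLM outputs against expert-created
# ground truth, per criterion, before a use case goes live.
EVAL_CRITERIA = ("succinctness", "factual_accuracy", "acknowledges_data_gaps")

def evaluate_against_ground_truth(cases, generate, score) -> dict:
    """`cases` pairs each task input with the expert's past output;
    `generate` runs the LLM pipeline; `score` returns a 0-1 value per
    criterion (in practice, a testing engineer or an automated grader)."""
    totals = {c: 0.0 for c in EVAL_CRITERIA}
    for task_input, expert_output in cases:
        llm_output = generate(task_input)
        scores = score(llm_output, expert_output)
        for c in EVAL_CRITERIA:
            totals[c] += scores[c]
    return {c: totals[c] / len(cases) for c in EVAL_CRITERIA}

def ready_to_ship(avg_scores: dict, threshold: float = 0.85) -> bool:
    """Gate deployment: every criterion must clear the use-case threshold
    (anywhere from 0.85 to 1.0, per the process above)."""
    return all(score >= threshold for score in avg_scores.values())
```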

Arun’s Recommendation: For the AI application layer, understand how your vendor incorporates safeguards that check for inaccurate AI model outputs. For AI model providers, ensure they take measures to mitigate bias in training data sets.

In closing

We are at the very beginning of using artificial intelligence-enabled tools to support the engineering & construction of our nation’s energy and telecommunications infrastructure. The potential benefits abound: reduced development costs, higher conversion of potential projects into commercially operating projects, even improved job safety and lower environmental impacts of construction. The promise of AI seems to fit the mold for how we walk the line between safe, efficient, reliable project delivery and the need for speed. But as with all new technologies, capabilities compound over time. Unreliable players will try to capitalize on the moment with AI-enabled everything. And with the cyberthreat environment intensifying, the stakes for building ample safeguards and finding trustworthy partners are high.

It’s time to build. We’re here to help.
