Tenable, an exposure management company, has released its Cloud AI Risk Report 2025, revealing that approximately 70% of cloud workloads using AI services contain unresolved vulnerabilities. The report details the complex cyber risks associated with cloud-based AI, including model manipulation, data tampering, and data leakage, and examines the current state of security risks in cloud AI development tools and frameworks, as well as in AI services offered by major cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The key findings from the report include:

  • Cloud AI workloads aren’t immune to vulnerabilities: Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability. In particular, Tenable Research found CVE-2023-38545—a critical curl vulnerability—in 30% of cloud AI workloads.
  • Jenga®-style cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks. This means all services built on this default service account are at risk.
  • AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
  • Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access and allow modification of all files on the instance.
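The Bedrock finding above comes down to training buckets whose public-access protections are incomplete. As a minimal sketch of the kind of check involved, the snippet below evaluates S3-style public-access-block configurations; the bucket names and config dictionaries are hypothetical, and in a real audit the configuration would come from an API call such as boto3's `s3.get_public_access_block`.

```python
# Hypothetical audit sketch: flag buckets whose public-access-block
# settings would leave AI training data reachable from the internet.
# All bucket names and configs here are illustrative, not real data.

REQUIRED_SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def blocks_public_access(config: dict) -> bool:
    """Return True only if every public-access-block setting is enabled."""
    return all(config.get(setting, False) for setting in REQUIRED_SETTINGS)

# Illustrative configurations: one fully locked down, one partially set.
buckets = {
    "training-data-locked": {s: True for s in REQUIRED_SETTINGS},
    "training-data-open": {"BlockPublicAcls": True},  # other settings unset
}

at_risk = [name for name, cfg in buckets.items()
           if not blocks_public_access(cfg)]
print(at_risk)  # -> ['training-data-open']
```

A bucket counts as protected only when all four settings are explicitly enabled; any setting left unset defaults to risky, which mirrors the "do not explicitly block public access" wording in the finding.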

“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Liat Hayun, VP of Research and Product Management, Cloud Security, Tenable. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

The Jenga-style concept, coined by Tenable, describes the tendency of cloud providers to build one service on top of another, with “behind the scenes” building blocks inheriting risky defaults from one layer to the next. Such cloud misconfigurations, especially in AI environments, can have severe risk implications if exploited.