What is generative AI security?

Overview

Generative AI security (sometimes referred to as AI code risk) is the practice of protecting applications and organizations from the unique risks introduced by generative artificial intelligence (GenAI) systems and AI-assisted coding tools. It also includes using AI responsibly to enhance application security testing and remediation.

Why is GenAI security important?

Generative AI has rapidly transformed software development, enabling developers to generate code faster and at scale. While this accelerates innovation, it also introduces new challenges:

  • Insecure code generation: AI tools may suggest code that contains vulnerabilities (see the sketch after this list).
  • Prompt injection attacks: Malicious inputs can manipulate AI models into unsafe behavior.
  • Data exposure risks: Sensitive information may leak through training sets or model outputs.
  • Overreliance on AI: Developers may lack the context to evaluate whether AI-generated code is secure.
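
To make the first of these risks concrete, here is a minimal, hypothetical sketch contrasting the kind of database lookup an AI assistant might suggest with a parameterized alternative. The function names and schema are illustrative, not drawn from any particular tool:

```python
import sqlite3

# Vulnerable: the kind of lookup an AI assistant might plausibly suggest.
# Concatenating user input into the SQL string enables SQL injection.
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()

# Safer: a parameterized query keeps the input as data, never as SQL.
def get_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```

A static analysis (SAST) scan would flag the first version as an injection risk; the second keeps user input out of the query structure entirely.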

At the same time, adversaries are beginning to use AI to automate attacks, making the threat landscape even more dynamic. Organizations must adopt strategies to mitigate these risks while safely leveraging AI’s benefits.


How does generative AI security work?

Securing generative AI and AI-generated code requires a combination of governance, testing, and monitoring.

Key practices include:

  • Secure coding checks: Running AI-generated code through static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), and infrastructure-as-code (IaC) testing.
  • Policy enforcement: Setting guardrails for when and how AI coding assistants can be used (a sketch follows this list).
  • Threat detection: Identifying prompt injection, data leakage, and misuse risks in AI-enabled applications.
  • Developer enablement: Educating teams on safe usage of AI coding tools.
  • AI-augmented AppSec: Using GenAI responsibly to reduce false positives and accelerate remediation.
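
As a sketch of what lightweight policy enforcement can look like, the hypothetical pre-commit check below flags obviously risky patterns in files produced with an AI coding assistant. The rule names and regular expressions are illustrative only; real pipelines would delegate this work to full SAST, SCA, and secret-scanning engines:

```python
import re
import sys

# Illustrative rules only; a production guardrail would rely on real
# SAST/SCA engines and secret scanners, not regular expressions.
RISKY_PATTERNS = {
    "possible hard-coded secret": re.compile(r"(?i)(api_key|password|secret)\s*=\s*['\"]"),
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "SQL built by string concatenation": re.compile(r"execute\(\s*['\"].*['\"]\s*\+"),
}

def scan_file(path: str) -> list[str]:
    """Return a finding for every line that matches a risky pattern."""
    findings = []
    with open(path, encoding="utf-8") as handle:
        for lineno, line in enumerate(handle, start=1):
            for name, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {name}")
    return findings

if __name__ == "__main__":
    findings = [f for path in sys.argv[1:] for f in scan_file(path)]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # a non-zero exit status blocks the commit
```

Wired into a pre-commit hook or a CI job, a gate like this turns the "when and how" of AI assistant usage into an enforceable rule rather than a guideline.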

Benefits of GenAI security

  • Reduced risk: Prevent insecure AI-generated code from reaching production.
  • Safer adoption: Enable teams to use AI coding assistants responsibly.
  • AI application security: Safeguard AI-powered apps by addressing unique risks such as model manipulation, data exposure, and adversarial inputs.
  • Continuous oversight: Monitor and manage AI-related vulnerabilities over time.
  • Faster remediation: Leverage AI to streamline vulnerability triage and fixes.
  • Balanced innovation: Reap the benefits of AI while minimizing new risks.

Generative AI security with OpenText Application Security

OpenText empowers enterprises to address AI-related risks while responsibly leveraging AI for security improvements:

  • OpenText™ Application Security Aviator™ (Fortify): AI-powered assistant that reduces false positives and accelerates remediation.
  • Software Security Research (SSR): Continuous updates to detect vulnerabilities in AI/ML frameworks and APIs.
  • Developer-first enablement: Guidance and training to help developers use AI tools securely.
  • Integrated testing platform: Comprehensive SAST, DAST, SCA, IaC, and API testing for AI-generated code.
  • Policy-driven orchestration: Govern AI usage through application security posture management (ASPM).

Key takeaway

Generative AI security helps organizations embrace the productivity benefits of AI while reducing the risks of insecure code, data leakage, and novel attack vectors introduced by AI-powered systems.