
AI Governance in Practice: Security Leadership Insights from the AI Governance Leadership Forum

AI adoption is accelerating at an unprecedented pace. But AI governance — the frameworks, policies, and operational controls organizations use to manage how AI systems access, process, and act on enterprise data — is struggling to keep up.

Key Takeaways: What AI Security Leaders Need to Know

- AI governance must be operational, not just a policy. The biggest challenge with AI isn’t the technology — it’s governing how AI systems interact with enterprise data in real time.

- The AI risk problem is a data problem. Most AI governance failures stem from uncontrolled data access, not model flaws. Data visibility is the foundation of any AI security strategy.

- Agentic AI requires identity governance. Autonomous AI agents need the same access controls as human employees — centralized identity, least privilege, and continuous monitoring.

- AI security spans the full ML lifecycle. Training data integrity, model behavior monitoring, and prompt interface security are all part of enterprise AI security — not just the production API.

- Overconfidence in existing frameworks is an AI risk. Many organizations believe their current privacy and security programs cover AI. Most do not. Continuous, cross-functional AI governance is required.

- AI security leadership starts with a single question: Do you know your data? Without data visibility, governing AI is nearly impossible.

What is AI governance?

AI governance refers to the operational frameworks, controls, and accountability structures organizations use to manage how AI systems interact with enterprise data, people, and processes. Effective AI governance spans data visibility, identity management, security monitoring, and regulatory compliance — and must be embedded continuously across the AI lifecycle, not limited to upfront policy documents.

That gap between what organizations want AI to do and what they’re prepared to govern was the central theme of the AI Governance Leadership Forum, where security, privacy, and AI security leaders gathered to discuss what responsible AI looks like in the real world.

Across the keynote and two expert panels, one insight surfaced repeatedly: AI governance can’t live in policy documents anymore. It has to be operational. And the organizations that get AI governance right won’t just move the fastest — they’ll govern the smartest.

Let’s break down how these insights play out in practice.

AI Security in an Agent-Driven Enterprise: Governance & Risk

Keynote: Francis Odum, Founder & CEO, Software Analyst Cyber Research

AI security has reached an inflection point for enterprise governance. Organizations are adopting AI rapidly — often under pressure from boards and executives eager to move faster. But the AI security frameworks needed to manage that growth are still catching up.

AI adoption is accelerating because boards and CEOs are pushing for it — but security is still lagging behind.

 

— Francis Odum, Founder & CEO, Software Analyst Cyber Research

The Rise of Agentic AI and What It Means for Security

The shift isn’t just about generative AI tools anymore. Enterprises are moving toward agentic AI systems — autonomous agents that can reason, interact with systems, and complete tasks with minimal human oversight. That dramatically changes the AI governance and AI risk equation.

These agents interact with enterprise systems the same way employees do: connecting to files, APIs, CRM systems, internal documentation, and customer records. Which means AI risk is no longer theoretical — it’s operational.

The AI Risk Problem Is Actually a Data Problem

AI risk management must start with data. While many organizations focus on model security, the bigger risk lies in data access. AI systems increasingly connect to sensitive enterprise data sources, dramatically expanding the attack surface.

Most governance failures won’t come from model weights — they’ll come from connected context: files, emails, customer data, and source code.

— Francis Odum


A Framework for AI Security Leadership

No matter how advanced AI becomes, the problem still starts as a data problem. Odum outlined three priorities for AI security leadership in the agentic era: establishing data visibility, converging data governance with identity management, and securing the full ML lifecycle from training through inference.

I’ve been really impressed with BigID’s move into agentic identity governance – extending identity visibility into the data layer. As non-human identities play a bigger role, understanding how they interact with sensitive data is critical.

 

— Francis Odum

AI Governance for Privacy: Designing for Unpredictable Systems

Panel: Aaron Weller (HP), Chantra Stevenson (Alaska Airlines), Mae-Beth Magno (Boeing)

Traditional privacy-by-design frameworks assumed systems behaved in deterministic ways. Generative AI breaks that model. This session tackled a fundamental AI governance challenge: how privacy frameworks must evolve for AI systems that don’t behave predictably.

AI Governance Must Become Continuous

AI systems learn, evolve, and sometimes behave unexpectedly. As a result, AI governance must shift from static reviews to continuous oversight. Organizations now need:

  • Ongoing behavioral monitoring
  • Drift detection and anomaly alerts
  • Model behavior testing post-deployment
  • Post-deployment AI governance reviews

Traditional privacy by design assumed we knew exactly what a system would do. With generative AI, we’re designing for systems that may surprise us.

 

— Aaron Weller, HP
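One piece of that continuous oversight, drift detection, can be sketched in a few lines. This is a minimal illustration under assumed inputs (hypothetical model confidence scores and a z-score threshold), not a reference to any specific monitoring product:

```python
from statistics import mean, stdev

def detect_drift(baseline, current, z_threshold=3.0):
    """Flag drift when the current window's mean shifts more than
    z_threshold baseline standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(current) - mu) / sigma
    return z > z_threshold, z

# Hypothetical model confidence scores captured at deployment time...
baseline = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.90, 0.91]
# ...and a later window where behavior has quietly degraded.
current = [0.62, 0.58, 0.65, 0.60, 0.61, 0.59, 0.63, 0.60]

drifted, z = detect_drift(baseline, current)  # drifted is True here
```

Production systems would use richer statistics over many signals, but the shape is the same: compare live behavior to an established baseline and alert on divergence.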

Overconfidence Is a Growing AI Risk

A critical insight from the panel: many organizations believe their existing privacy frameworks already cover AI. That assumption is itself an AI risk.

A lot of organizations are overconfident about their privacy readiness for generative AI.

 

— Chantra Stevenson, Alaska Airlines

New AI risks — from prompt injection to training data leakage — require AI governance approaches that go beyond traditional privacy programs.

AI Governance Requires Cross-Functional Collaboration

This isn’t a gate process anymore — it’s infrastructure.

 

— Mae-Beth Magno, Boeing

AI compliance and privacy governance can’t operate in isolation. They must be embedded directly into product development — working across engineering, product, and data governance teams. The goal isn’t to slow innovation; it’s to build guardrails that scale with it.

Securing AI Systems from Training to Production

Panel: Sabna Sainudeen (Carlsberg), Samaresh Singh (HP), Marissa Palmer (TrueCar)

Securing AI systems requires a fundamentally different approach than traditional infrastructure security. AI security must span the entire machine learning lifecycle — from training data integrity through model deployment and inference.

AI Expands the Enterprise Security Surface

AI systems introduce entirely new security considerations. Organizations must now secure training data integrity, model behavior, and the prompt interfaces users interact with, not just the production API.

Security has to be embedded across the entire machine learning lifecycle — not just the API or production layer.

 

— Samaresh Singh, HP

Training Data Integrity: The Most Critical Layer

One of the biggest AI security vulnerabilities sits at the very beginning of the lifecycle: training data integrity. If training data is compromised, the model’s behavior may become unreliable — and the issue may only surface much later.

If training data is manipulated early, the model’s behavior may only reveal the problem much later.

 

— Samaresh Singh, HP

AI Security Demands Behavioral Monitoring

AI systems change how security teams must operate. Instead of focusing only on infrastructure alerts, teams must monitor model behavior and anomalies.

We have to look for behavioral anomalies — because AI doesn’t always fail in obvious ways.

 

— Marissa Palmer, TrueCar
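A simple form of the behavioral monitoring Palmer describes is comparing an agent's observed actions against an established baseline. The action labels below are invented for illustration; a real deployment would baseline far richer signals:

```python
def find_behavioral_anomalies(baseline_actions, observed_actions):
    """Return observed agent actions that fall outside the agent's
    established behavioral baseline."""
    allowed = set(baseline_actions)
    return [a for a in observed_actions if a not in allowed]

# Hypothetical action labels for a customer-support agent.
baseline = ["read:crm", "read:docs", "write:ticket"]
observed = ["read:crm", "read:source_code", "write:ticket", "export:customer_db"]

alerts = find_behavioral_anomalies(baseline, observed)
# alerts -> ["read:source_code", "export:customer_db"]
```

The point is not the mechanism but the posture: AI failures are detected by watching what the system does, not only whether its infrastructure is healthy.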

AI Security Builds on Existing Foundations — But Requires New Layers

Despite the new complexity, many traditional security principles still apply. Organizations can reuse CI/CD pipeline security, dependency scanning, and infrastructure monitoring — but must expand into data governance, model monitoring, and AI-specific threat detection.

We can reuse many of our existing security practices — but the data layer and model behavior introduce entirely new challenges.

 

— Sabna Sainudeen, Carlsberg Group

The Strategic Shift: From AI Adoption to AI Governance

Across all three sessions, one insight became clear: the biggest challenge with AI isn’t the technology itself — it’s governing how the technology interacts with enterprise data. AI systems amplify existing risks around data access, identity governance, security monitoring, and regulatory accountability.

The organizations that succeed with AI won’t just move the fastest — they’ll govern the smartest. And AI governance starts with a simple question:

Do you actually know your data?

Because without that visibility, governing AI — and managing AI risk — becomes nearly impossible.

Frequently Asked Questions About AI Governance and AI Security

What is AI governance and why does it matter for enterprise security?

AI governance is the set of operational frameworks, policies, and controls that organizations use to manage how AI systems access data, make decisions, and operate within enterprise environments. It matters for enterprise AI security because ungoverned AI dramatically expands the attack surface — connecting to sensitive files, customer records, APIs, and source code without adequate oversight.

What are the biggest AI risks for enterprise organizations in 2025?

The biggest AI risks for enterprises include: uncontrolled data access by AI agents, training data manipulation, prompt injection attacks, training data leakage, overconfidence in existing privacy frameworks, and lack of behavioral monitoring post-deployment. AI security experts consistently emphasize that AI risk management must start with data visibility.

How is agentic AI different from traditional AI governance challenges?

Agentic AI systems are autonomous agents that can reason, take actions, and interact with enterprise systems — including files, APIs, CRM platforms, and internal documentation — without constant human oversight. This makes agentic AI governance significantly more complex than traditional AI governance, as these agents behave more like digital employees and require identity and access controls accordingly.
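Treating an agent like a digital employee can be sketched as a scoped identity. The class, agent ID, and scope strings below are illustrative assumptions, not any identity product's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent modeled as a managed digital identity with an
    explicit, least-privilege set of permitted scopes."""
    agent_id: str
    scopes: set = field(default_factory=set)

    def can(self, action: str) -> bool:
        return action in self.scopes

# Hypothetical support agent: may read the CRM and write tickets, nothing else.
support_agent = AgentIdentity("agent-support-01", {"crm:read", "ticket:write"})
```

Here `support_agent.can("crm:read")` returns `True`, while `support_agent.can("source:read")` returns `False`: anything outside the granted scopes is denied by default, mirroring least privilege for human accounts.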

What does AI security leadership look like in practice?

AI security leadership means operationalizing AI governance across data, identity, and security functions. It involves converging data governance with identity management, treating AI agents as managed digital identities, securing the full ML lifecycle from training data through inference, and building continuous monitoring into AI deployments rather than relying on one-time reviews.

How should organizations build an AI governance framework?

An effective AI governance framework should: (1) establish data visibility as the foundation, (2) integrate identity and access controls for both human and AI agents, (3) embed security and privacy reviews continuously throughout the AI lifecycle, (4) include behavioral monitoring and drift detection post-deployment, and (5) align cross-functional teams including security, privacy, engineering, and legal around shared AI risk accountability.
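Steps (1), (2), and (4) of that framework can be illustrated together as a least-privilege authorization gate that also writes an audit trail for monitoring. This is a sketch under assumed conventions; the resource names and classification labels are invented for the example:

```python
def authorize_agent_access(agent_scopes, resource, classification, audit_log):
    """Least-privilege gate: the agent may touch a resource only if its
    scopes cover that resource at that classification; every decision
    is appended to an audit log that feeds behavioral monitoring."""
    allowed = f"{classification}:{resource}" in agent_scopes
    audit_log.append(
        {"resource": resource, "classification": classification, "allowed": allowed}
    )
    return allowed

log = []
scopes = {"internal:docs", "internal:crm"}
ok = authorize_agent_access(scopes, "docs", "internal", log)                # granted
denied = authorize_agent_access(scopes, "customer_pii", "restricted", log)  # refused
```

Note that the gate depends on classification being known up front, which is why data visibility is step one: without it, there is nothing meaningful to check a scope against.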

What is the relationship between data governance and AI security?

Data governance and AI security are inseparable. AI systems can only be secured if organizations first understand what data exists, where it lives, and who — or what — can access it. Data governance provides the foundation for AI security by enabling organizations to control what training data is used, limit AI agent access to sensitive data, and detect anomalous data access patterns.

Watch the Full Forum Recording

Learn how BigID helps organizations discover, secure, and govern data for AI — from training through production.
