
Key Strategies for Securing AI and GenAI: Insights from Allie Mellen’s Keynote Session

AI isn’t just transforming your business – it’s rewriting the rulebook for risk, trust, and governance. The recent BigID Digital Summit brought together industry experts to explore AI risks, security measures, and responsible innovation strategies, attracting over 1,100 participants eager to dive into AI governance and risk reduction.

The keynote session featuring Allie Mellen, Principal Analyst at Forrester Research specializing in Security & Risk, offered an illuminating perspective on the challenges and strategies surrounding AI and GenAI security. For professionals operating in the realm of cybersecurity, particularly CISOs, this keynote presented actionable insights into how organizations can secure AI models and infrastructure while preparing for the risks posed by increasingly agentic systems.

The Evolving Risk Landscape in AI Security

Allie Mellen began her talk by emphasizing the dynamic and rapidly evolving nature of the AI security space. With rapid advances in Generative AI (GenAI) applications, emerging hardware vulnerabilities, and the impact of models like Gemini and ChatGPT, the urgency of safeguarding models and infrastructure cannot be overstated.

Mellen pointed out that AI attacks today remain largely academic, coming mostly from researchers exploring vulnerabilities or from ethical “white hat” hacking. However, the landscape is shifting:

Now that is going to change. We are going to see these attacks pick up quite a bit more… far more focused on not just the models themselves, but also the underlying data and the infrastructure surrounding the models.

Key Vulnerabilities and Threats

One of the most compelling segments of the keynote was Mellen’s review of the OWASP Top 10 threats for Large Language Models (LLMs) and GenAI applications. These threats spotlight potential attack avenues at every layer of GenAI systems, highlighting vulnerabilities that CISOs must address.

OWASP GenAI Threats:

  • Prompt Injection Attacks: Crafting input that manipulates a prompt into producing output the attacker could not otherwise obtain.
  • Sensitive Information Disclosure: AI systems, especially those integrated with external APIs or datasets, may inadvertently expose private or proprietary data.
  • Model/Data Poisoning: By tampering with training or operational data, attackers can corrupt the underlying knowledge base of a system, leading to misinformation or poor decision-making.
  • Overprivileged AI Agents: A recurring theme in AI security is the importance of maintaining strict control over agent permissions:

We need to ensure a minimum set of permissions, the minimum capabilities, tools, and decision-making ability for AI agents.
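
To make that least-privilege idea concrete, here is a minimal sketch of a tool-allowlist gatekeeper for an AI agent. The tool names, registry, and gatekeeper function are illustrative assumptions, not any specific agent framework’s API:

```python
# Minimal sketch: a least-privilege gatekeeper for AI agent tool calls.
# All tool names and implementations here are illustrative assumptions.

TOOL_REGISTRY = {
    "search_docs": lambda query: f"results for {query!r}",  # read-only lookup
    "summarize_text": lambda text: text[:200],              # pure transformation
}

# Grant only the minimum capabilities the task requires; anything not
# listed here (email, shell access, database writes) is denied by default.
ALLOWED_TOOLS = {"search_docs", "summarize_text"}

def invoke_tool(tool_name: str, *args, **kwargs):
    """Every tool call the agent makes must pass through this check."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"agent may not call {tool_name!r}")
    return TOOL_REGISTRY[tool_name](*args, **kwargs)

print(invoke_tool("search_docs", "quarterly report"))  # allowed
# invoke_tool("send_email", "...")  -> raises PermissionError
```

The key design choice is default-deny: anything not explicitly granted simply cannot be invoked, which limits the blast radius if an agent is hijacked.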

Risks in AI Agentic Systems

AI agents – systems capable of independent action and decision-making, and a fast-growing trend – present substantial risks. Mellen divided these risks into tangible categories that organizations should meticulously evaluate:

  1. Goal and Intent Hijacking: Attackers may manipulate AI agents to subvert their intended purpose.
  2. Cognitive and Memory Corruption: Poisoning the data or memory of AI models can lead to significant failures, misinformation, and operational mistakes.
  3. Unrestrained Agency: Excessive permissions granted to AI agents could exacerbate the attack surface, leaving systems vulnerable to unauthorized actions or decisions.
  4. Data Leakage: Data loss prevention (DLP) is critical to safeguard sensitive information shared or generated by AI agents, particularly when they communicate externally.
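
To ground the data-leakage point, the following sketch applies a DLP-style redaction pass to an agent’s outbound messages. The regex patterns are deliberately simplistic assumptions; production DLP relies on far richer classification:

```python
import re

# Illustrative patterns only; real DLP uses much broader classification.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_outbound(message: str) -> str:
    """Redact sensitive matches before an agent communicates externally."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        message = pattern.sub(f"[REDACTED:{label}]", message)
    return message

print(redact_outbound("Reach jane@corp.com; SSN 123-45-6789."))
# -> "Reach [REDACTED:email]; SSN [REDACTED:ssn]."
```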

Frameworks and Best Practices: The Aegis Framework

Mellen then introduced the Aegis Framework – Agentic AI Guardrails for Information Security – as a proactive approach to handling AI risks. The framework provides three core principles CISOs should integrate into their AI security roadmap:

  • Least Agency: Ensure that AI agents are equipped with only the bare minimum permissions and access required to execute specific tasks.
  • Continuous Risk Management: Establish a robust risk management framework for monitoring, evaluating, and mitigating risks in real-time.
  • Guardrails and Governance: Implement strict policies and procedures to control actions AI agents can take, as well as their communications within internal systems and externally:

Regular model monitoring, input/output validation, and model guardrails…all must be baked into your process to ensure security.
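
As a rough illustration of what “baked into your process” can mean in code, the sketch below wraps a model call with input validation, output validation, and monitoring hooks. The call_model function, the injection markers, and the output check are all hypothetical placeholders:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

# Crude heuristics for illustration; real guardrails layer classifiers,
# policy engines, and human review on top of checks like these.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")
MAX_PROMPT_CHARS = 4000

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (an assumption, not a real API)."""
    return f"model response to: {prompt[:40]}..."

def guarded_completion(prompt: str) -> str:
    """Input validation -> model call -> output validation -> monitoring."""
    lowered = prompt.lower()
    if len(prompt) > MAX_PROMPT_CHARS or any(m in lowered for m in INJECTION_MARKERS):
        log.warning("blocked prompt that failed input validation")
        raise ValueError("prompt failed input validation")
    response = call_model(prompt)
    if "system prompt" in response.lower():  # toy output-validation rule
        log.warning("blocked response that failed output validation")
        raise ValueError("response failed output validation")
    log.info("recorded prompt/response pair for ongoing model monitoring")
    return response

print(guarded_completion("Summarize our Q3 security review."))
```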

Strengthening Organizational Knowledge and Governance

Beyond technical implementations, Mellen stressed the importance of education and cultural transformation within organizations. Security leaders must transition from being perceived as blockers to trusted advisors who champion innovation:

Embrace [AI] and make it very clear to the organization that you are embracing, understanding, and valuing that use of AI.

Educating non-security stakeholders like data scientists is vital to fostering security hygiene and best practices. Mellen encouraged CISOs and security teams to create an open environment where queries about AI are welcome.

Moreover, empowering security champions within data science teams can help bridge gaps between technical innovation and secure implementation.

Bringing It All Together: Zero Trust and Defensive AI

As AI capabilities expand, implementing zero trust principles is essential:

The idea of least privilege is going to be something that significantly helps you.

Defensive uses of AI were another cornerstone of Mellen’s session. Organizations must leverage the strengths of AI to detect and counter AI-enabled threats. From understanding AI-enabled phishing campaigns to managing insider threats via robust identity and access management, proactive defenses are critical.

We have to use AI to defend against AI-enabled attacks. There’s no other option.

Where BigID Comes In

For CISOs and security teams, protecting AI starts with protecting data. Every AI model – whether built or bought – relies on data that must be discovered, classified, governed, and secured with precision. That’s where BigID delivers.

BigID gives security leaders the visibility, control, and intelligence to safeguard the data fueling AI. The platform automatically discovers and classifies all sensitive, regulated, and proprietary data across cloud, on-prem, and AI pipelines, so teams can identify what data is being used, where it’s flowing, and who can access it.

With BigID’s AI Data Security capabilities, enterprises can build the foundation for secure, responsible AI – strengthening trust, minimizing risk, and accelerating innovation with confidence.

In short: before you can secure AI, you need to secure your data. BigID makes that possible.

Allie Mellen’s keynote challenged security leaders to rethink their approach to AI security. By leveraging frameworks like Aegis and implementing solutions from industry leaders like BigID, organizations can safeguard their AI-enabled innovation while navigating the risks of agentic systems and GenAI applications. In a world increasingly defined by AI-powered decision-making, these steps are not just options – they are necessities.

Missed the live discussion? Stay in the loop for future events or schedule a 1:1 demo to future-proof your AI Security Program.

AI TRiSM: Ensuring Trust, Risk, and Security in AI with BigID

Download the white paper to learn what AI TRiSM is, why it matters now, its four main pillars, and how BigID helps implement the AI TRiSM framework to ensure AI-driven systems are secure, compliant, and trustworthy.

Download the White Paper