Forrester just evaluated the ten most significant sensitive data discovery and classification platforms. BigID came out as a Leader — with the highest possible scores across eleven criteria and top ranking in the Current Offering category. Here’s how we connect these findings to the capabilities needed to power your organization’s AI security initiatives.
When Forrester Research evaluates the sensitive data discovery and classification market, it scores platforms across a set of current offering and strategy criteria. Those criteria, in our view, reflect the core challenges enterprise teams are trying to solve today: Can platforms find sensitive data at scale? Do they classify it accurately and enrich it with context? Do they integrate into the broader security stack? Are they built for what’s coming next?
In The Forrester Wave™: Sensitive Data Discovery And Classification Solutions, Q2 2026, BigID was named a Leader — one of three vendors in that category out of ten evaluated. BigID received the highest possible score in eleven criteria. But the real story isn’t the scorecard — it’s what the evaluation reveals to us about where enterprise data security is heading, and the growing importance of the intersection of sensitive data discovery and AI security.
What Forrester Actually Found
The evaluation covered fifteen current offering criteria and seven strategy criteria. Forrester scored each vendor on a scale of 1 to 5, where 5 represents capabilities that are superior relative to other vendors in this evaluation. BigID received the maximum possible score — a 5 — across eleven of those criteria.
In the current offering category: cloud data source coverage, on-premises data source coverage, enrichment for classification, language support, tuning to improve accuracy, integrations, and secure-by-design commitments.
In the strategy category: innovation, roadmap, partner ecosystem, and adoption.
Strategy is a major component of this evaluation, with innovation and roadmap together accounting for 45% of the total score. BigID received the highest possible score in both categories. To us, that goes beyond a snapshot of current capabilities — it reflects how the BigID Next platform is designed to evolve, and how well it’s positioned to meet the next wave of data security and AI governance challenges.
Check out our highlights from the Wave:
- BigID’s “impressive strengths in discovery across both cloud and on-premises data sources (including mainframe environments), blend of classification techniques and enrichment, superior tuning capabilities, and broad set of integrations enable the platform to cover numerous use cases — from compliance and information governance to AI security and governance.”
- “BigID is engineered for performance and petabyte scale.”
- “BigID has a solid vision of an autonomous governance engine, and its excellent innovation strategy and well-defined roadmap of planned enhancements position it well to deliver.”
- Forrester’s take: “BigID is a compelling choice for multinationals, large organizations, and government entities with complex data environments and localization requirements.”
Those aren’t promotional statements. They’re findings from an independent evaluation.
Why the AI Security Angle Is the Real Story
Our reading of the Forrester evaluation reveals a clear pattern across the market: the highest-scoring vendors are those that built sensitive data discovery as a foundation — and extended that foundation into AI security, AI governance, and agentic use cases.
This makes sense architecturally. You cannot govern AI without knowing what data it touches. You cannot enforce AI access policies without knowing where sensitive data lives. You cannot detect AI-driven data exfiltration without a classification layer that knows what’s sensitive and what’s not. Sensitive data discovery isn’t just a prerequisite for AI security — it’s the substrate.
BigID’s vision of an “autonomous governance engine” is precisely the kind of framework that enterprise AI programs need: a platform that doesn’t just find sensitive data but continuously governs how it flows, who accesses it, and whether that access is appropriate in a world where AI agents are operating on it autonomously.
This is why BigID’s highest possible scores in the Innovation and Roadmap criteria matter to us — not just as future plans, but as indicators of how the platform is built to evolve. AI governance isn’t a feature you add to a data security platform. It requires the platform to be architected for it from the start — with coverage breadth, enrichment depth, integration flexibility, and petabyte-scale performance as preconditions.
The Four Capabilities That Make This an AI Security Story
1. Discovery at petabyte scale — including where AI actually trains and operates
AI models train on data. They operate on data. They store outputs in data systems. The attack surface of an enterprise AI program is, in large part, a data discovery problem: where is the sensitive data that AI can reach, create, or exfiltrate?
Forrester found BigID “engineered for performance and petabyte scale” with “impressive strengths in discovery across both cloud and on-premises data sources (including mainframe environments).” Our commitment to breadth and depth of data sources — from the most modern cloud data stores to legacy mainframe environments — covers the full scope of where enterprise sensitive data actually lives. An AI governance program built on incomplete discovery is built on sand.
2. Enrichment for classification — the signal layer AI governance needs
Basic classification tells you a file contains PII. Enrichment tells you it contains PII belonging to employees in a regulated jurisdiction, accessed by a service account that feeds a model you haven’t reviewed. That’s the difference between a label and an actionable signal.
BigID received the highest possible score in the enrichment for classification criterion, with Forrester noting the “blend of classification techniques and enrichment” that adds context like data lineage and permissions to the classification layer.
For AI security programs, this enrichment is the signal layer that makes policy enforcement possible. Without it, you’re governing AI with incomplete information.
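To make the distinction concrete, here is a minimal sketch of why an enriched record is actionable where a bare label is not. All field names, the example object path, and the policy logic are illustrative assumptions for this post, not BigID’s actual schema or API:

```python
# Hypothetical sketch: a bare classification label vs. an enriched record
# that a policy engine can act on. Field names are illustrative only.

bare_label = {"object": "s3://bucket/hr_export.csv", "classification": "PII"}

enriched = {
    "object": "s3://bucket/hr_export.csv",
    "classification": "PII",
    "data_subjects": "employees",
    "jurisdiction": "EU",             # regulated region: GDPR in scope
    "accessed_by": ["svc-ml-train"],  # service account feeding a model
    "model_reviewed": False,          # downstream model not yet reviewed
}

def needs_review(record: dict) -> bool:
    """Toy policy: flag regulated PII flowing into an unreviewed model."""
    return (
        record.get("classification") == "PII"
        and record.get("jurisdiction") == "EU"
        and not record.get("model_reviewed", True)
    )

print(needs_review(bare_label))  # False: a bare label carries no signal
print(needs_review(enriched))    # True: enrichment makes policy actionable
```

The bare label can only drive a generic alert; the enriched record carries enough context (jurisdiction, accessor, downstream model status) for a policy to fire automatically. That is the sense in which enrichment is the signal layer.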
3. Integrations — the connective tissue of autonomous governance
AI governance doesn’t happen in a single platform. It happens across a stack: SIEM, SOAR, DLP, identity governance, cloud security posture management, and increasingly, AI-specific tooling. The integration layer is what turns data discovery findings into enforcement actions.
Forrester gave BigID the highest possible score in the integrations criterion — citing the partner ecosystem’s focus on “metadata exchange to remove silos across enterprise technology stacks and support autonomous workflows.”
Autonomous workflows are not an optional feature for AI governance programs. As AI agents act with increasing autonomy, the governance response has to match that pace. That requires integrations deep enough to trigger policy enforcement automatically — not just generate alerts.
4. The autonomous governance engine — from vision to architecture
The concept of an autonomous governance engine is worth unpacking, because it describes exactly what the AI security market is converging on. The goal isn’t a dashboard that shows you where sensitive data is. It’s a system that continuously discovers, classifies, enriches, and governs sensitive data — and enforces policy autonomously, without requiring a human in every loop.
BigID also received the highest possible score in the innovation and roadmap criteria. The autonomous governance engine is the architectural bet that AI security requires platforms to make. BigID is making it.
What This Means If You’re Evaluating Data Security Platforms
If you’re a CISO, data security leader, or privacy officer evaluating sensitive data discovery and classification platforms, the Forrester Wave gives you a rigorous independent baseline. Three vendors made the Leaders category. The differences between them map to specific organizational requirements.
Forrester identified BigID as a compelling choice for “multinationals, large organizations, and government entities with complex data environments and localization requirements.” If that describes your organization — particularly if your environment spans cloud, on-premises, and mainframe; if your data estate is measured in petabytes; if you need coverage across multiple jurisdictions and languages; and if your AI security program is a real priority rather than a roadmap item — we believe the independent evaluation points clearly in one direction.
If you’re specifically evaluating platforms for AI security and governance use cases, the additional question to ask is: what is this platform’s architecture for governing AI? Not just what features does it have, but does the underlying platform — the discovery breadth, the enrichment depth, the integration layer, the scale — support the autonomous governance model that AI programs require? Those are the criteria where the Forrester evaluation is most useful, and most revealing.
The Bigger Picture: Why Sensitive Data Discovery Is Now an AI Security Problem
Enterprise security teams are grappling with a structural shift. For years, data security was largely a compliance function — find PII, classify it, report on it, protect it at rest and in transit. The frameworks were relatively stable. The perimeter was relatively knowable.
AI changes both of those assumptions. When AI systems can discover, synthesize, and act on sensitive data autonomously — and when AI agents can move data across systems without human oversight — the data security perimeter becomes effectively infinite. The classification problem becomes a real-time problem. The governance problem becomes an autonomous-response problem.
Sensitive data discovery and classification platforms are being forced to evolve into something more foundational: the data intelligence layer that all AI security, AI governance, and AI risk programs are built on. BigID believes that the vendors that make the Leaders category in Forrester’s Q2 2026 evaluation are the ones that recognized this evolution early enough to architect for it.
The highest possible scores in the Innovation and Roadmap criteria — in a market undergoing exactly this kind of foundational shift — are the scores that matter most to us.
Read the independent evaluation.
The Forrester Wave™: Sensitive Data Discovery And Classification Solutions, Q2 2026 evaluated ten providers across 22 criteria. See how the scores break down and what the findings mean for your data security and AI governance program.
Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.
