In Part One of our Data Privacy Week 2026 series, CyberTech Insights analysts explored why resilience, identity, and governance will define the next era of cybersecurity as organizations move beyond compliance. The conclusion was unmistakable: in an AI-driven, always-on enterprise, privacy cannot be promised through policy. It must be proven under pressure.
Part Two moves from principle to practice.
IDC has described security as a business enabler, and that framing is now being stress-tested. As organizations scale AI, automate workflows, and decentralize decision-making, data privacy is being tested not in audits, but in real-world conditions—credential misuse, agentic AI misfires, shadow access, ransomware recovery windows, and regulatory scrutiny that arrives after the damage is done.
To understand how privacy is being operationalized in 2026, CyberTech Insights invited global cybersecurity leaders, CISOs, and technology experts to share how they are redesigning systems to withstand failure—not just prevent it.
Contributing experts in this installment include:
- Becca Harness, Chief Information Security Officer, Deltek
- Martin Raison, Co-founder & CTO, Nabla
- Melissa Bischoping, Head of Security Research, Tanium
- Bobby Ford, Chief Strategy & Experience Officer, Doppel
- Corey Nachreiner, Chief Security Officer, WatchGuard
- Natalia Vasilyeva, EVP, Marketing & Strategy, Anzu
- Monica Landen, Chief Information Security Officer, Diligent
- Dan Balaceanu, Chief Product Officer & Co-Founder, DRUID AI
- Steve Van Till, Founder & President, Brivo
- Mike Baker, Vice President & Global CISO, DXC Technology
Together, their perspectives reveal a clear truth: privacy in 2026 is no longer about static controls. Rather, it is about survivability, traceability, and trust when things go wrong.
Privacy Is No Longer a Checkbox—It’s a Leadership Choice
Despite years of awareness, many organizations still approach privacy as a legal requirement rather than a core responsibility.
According to the Cisco 2023 Data Privacy Benchmark Study, 95% of organizations now say privacy is a business imperative — and 94% report that customers won’t buy from them if their data isn’t properly protected. This marks a clear shift from seeing privacy as merely a legal checkbox to recognizing it as core to customer trust and commercial value.
Becca Harness, CISO at Deltek, is direct about the root problem:

“This Data Privacy Week, organizations don’t need to be reminded that privacy matters—it’s something they’ve known for years. The real challenge is that many businesses still treat privacy as a legal obligation rather than a core responsibility. Data privacy can no longer be treated as a checkbox. It must be foundational to how enterprises and project-based businesses build trust and drive innovation.”
In practice, this mindset gap creates recurring failures.
“Across project-based industries, privacy, compliance, and digital transformation are becoming increasingly interconnected. When privacy is integrated at the outset of a system or process, rather than bolted on at the end, it removes friction, reduces risk, and improves operational agility.”
Harness emphasizes that privacy failures are not abstract—they impact people:
“This approach doesn’t just satisfy regulatory requirements; it protects the human beings behind the data.”
And the long-term cost of ignoring that reality is steep:
“Organizations that treat privacy as a strategic priority, rather than a compliance exercise, will earn lasting trust in an AI-driven world. Those who don’t may find that trust, once lost, is far more difficult to rebuild than fixing any system they failed to protect.”
CyberTech Insights Analysis: Privacy Is No Longer a Checkbox — It’s a Leadership Choice
While compliance frameworks like GDPR, CCPA, and others have driven baseline investments in privacy, leading organizations are now recognizing that compliance alone does not satisfy customers, regulators, or boards. As the Cisco benchmark data shows, the vast majority of companies now view privacy as a business imperative, and not merely a legal requirement.
This shift matters for three reasons:
- Commercial Expectations Are Driving Change
When nearly all customers say they won't engage with a company whose data isn't protected, privacy becomes a differentiator in market positioning and customer retention—not a back-office function.
- Operational Integration Is the New Standard
As Becca Harness notes, embedding privacy at the outset of system design reduces risk and operational friction. This aligns with broader governance research showing that privacy programs are more effective when paired with data governance and identity controls rather than siloed compliance functions.
- Strategic Trust Is Harder to Rebuild Than a System
Leaders who delay treating privacy as strategic risk regulatory penalties and a loss of trust that is far harder to repair than any system they failed to protect. This theme is echoed by privacy experts across industries.
In short, the data supports what the panel calls out: the mindset shift from “check the box” to “build for trust” is no longer optional if an organization hopes to compete in an AI-powered digital economy.
Agentic AI: Where Speed Without Governance Becomes Risk
As AI moves deeper into enterprise workflows, governance is no longer optional — it is the control surface for privacy. According to Deloitte’s 2026 State of AI in the Enterprise report, only about 21% of companies currently have robust oversight and safety mechanisms for autonomous AI systems, even as adoption is projected to increase sharply in the next two years. This mismatch between rapid integration and governance maturity underscores the privacy and operational risks of agentic AI.
Martin Raison, Co-founder and CTO at Nabla, highlights the tension clearly, particularly in healthcare:
“Data Privacy Week is an important reminder for leaders that as AI becomes more embedded in enterprise workflows and decision-making, governance plays just as pivotal a role as accelerating technical capabilities.”
AI’s potential is undeniable—but so are the risks:
“In healthcare, AI is a huge asset—it can analyze patient data, including medical history, scans, and lab results, to identify the root causes of health conditions. However, in a field where data privacy is so top of mind, these capabilities require strong guardrails and human oversight to be deployed safely.”
Raison warns that speed without policy alignment creates exposure:
“Often, we see companies rush to accelerate AI capabilities and deploy agents without extending data access policies or understanding how they act on behalf of users. This amplifies risk and erodes trust.”
The conclusion mirrors a broader industry shift:
“The most successful AI strategies will treat privacy and security as foundational principles rather than afterthoughts.”
CyberTech Insights Analysis: Governance Is the Missing Control Layer
The data reinforces what security leaders are seeing firsthand: AI adoption is outpacing governance maturity, and the gap between the two is widening.
As agentic systems gain autonomy across enterprise workflows, insufficient oversight turns speed into risk. Without clear policies defining what AI agents can access, how they act, and how their decisions are audited, privacy exposure scales silently and rapidly.
The implication is clear.
Agentic AI cannot be governed retroactively. Organizations that embed governance, visibility, and human oversight at design time are far better positioned to harness AI’s benefits without eroding trust. Those that don’t risk discovering their privacy gaps only after autonomous systems have already amplified them.
Identity-First Security: The New Control Plane for Privacy
Across nearly every modern data privacy failure, one factor consistently appears: misused identity. According to the Verizon 2023 Data Breach Investigations Report, 74% of breaches involve the human element, including stolen credentials, misuse of privileges, or social engineering. IBM’s Cost of a Data Breach Report further confirms that compromised credentials remain among the most common initial attack vectors, driving prolonged exposure and higher business impact.
As AI agents, APIs, SaaS platforms, and third-party integrations proliferate, identity has become the primary control plane for data access. Firewalls and perimeters no longer define exposure—permissions do.
In AI-driven enterprises, access is no longer binary or human-only. Applications, bots, agents, and automated workflows all act with authority. When that authority is poorly governed, privacy collapses quietly.
This is why identity-first security is reshaping data protection strategies. Instead of asking “Where is the data?”, leading organizations are asking:
- Who—or what—can access it?
- Under what conditions?
- Can that access be verified continuously, not assumed?
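The questions above can be sketched as a deny-by-default, identity-first policy check. This is a minimal illustration, not a real product API: the resource names, principal types, and the 15-minute re-verification window are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessRequest:
    principal: str        # human user, service account, or AI agent
    principal_type: str   # "human" | "service" | "agent"
    resource: str
    last_verified: datetime  # when this identity last proved itself

# Illustrative policy: which kinds of principals may touch which data.
POLICY = {
    "billing-db": {"human", "service"},   # no autonomous agents allowed
    "support-notes": {"human", "agent"},
}

# Access is verified continuously, not assumed: sessions older than this
# must re-authenticate before touching sensitive data.
MAX_VERIFICATION_AGE = timedelta(minutes=15)

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; allow only fresh, policy-matched identities."""
    allowed_types = POLICY.get(req.resource, set())
    if req.principal_type not in allowed_types:
        return False
    age = datetime.now(timezone.utc) - req.last_verified
    return age <= MAX_VERIFICATION_AGE
```

The design choice worth noting is the default: an unknown resource yields an empty allow-set, so new systems start locked down rather than exposed.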
Identity-centric architectures reduce blast radius, surface misuse earlier, and provide the forensic clarity required when regulators, customers, or boards demand answers.
CyberTech Insights Analysis: Identity Is Where Privacy Fails First
The data makes one point unavoidable: privacy does not fail at the perimeter—it fails at the permission layer. As identities now represent people, machines, services, and AI agents, access decisions have become the single most important determinant of whether sensitive data remains protected.
Organizations that prioritize identity-first security gain two critical advantages: earlier detection of misuse and smaller blast radius when controls fail. Those that don’t often discover privacy incidents only after trusted access has already been abused—when response options are limited and trust erosion is already underway.
Agentic AI and Shadow AI: When Innovation Outpaces Governance
Agentic AI has shifted privacy risk models fundamentally. Gartner predicts that by 2026 more than 80% of enterprises will be using generative AI technologies, while fewer than one-third will have mature governance in place, creating a widening gap between innovation and control. Microsoft’s research further shows that the majority of employees are already using AI tools at work, often without formal approval or guardrails—accelerating shadow AI risk.
Unlike traditional applications, agentic systems:
- Act autonomously
- Chain decisions across systems
- Persist beyond a single interaction
Melissa Bischoping, Head of Security Research at Tanium, expands on the operational challenge:

“As AI agents and workflows become an undeniable part of the modern enterprise, data privacy expands into a complex ecosystem that many organizations are scrambling to understand and govern.”
Unchecked automation scales risk as efficiently as it scales productivity:
“While AI has given us unprecedented ability to execute sophisticated workflows at speed and scale, we also understand that—if ungoverned and unchecked—it can introduce unprecedented risk and loss of data at that same scale.”
Operational maturity starts with visibility:
“To lead responsibly as an AI-forward technologist, build on a strong foundation of data governance and visibility first.”
And it requires precise answers—not assumptions:
“Data privacy in the era of AI requires a clear, accurate, real-time answer to the questions, ‘What AI agents exist in my environment? What data/systems can they access? Under what permissions can they access systems? And do I have governance and controls to ensure autonomous workflows and agentic actions can be traced and audited with confidence?’”
CyberTech Insights Analysis: Shadow AI Is the Silent Multiplier
Shadow AI compounds the risk. Employees adopt AI tools faster than security teams can evaluate them. RAG-enabled chat interfaces, internal copilots, and workflow automations often launch without clear boundaries on what data can be submitted, retained, or reused.
The result is not always a “breach” in the traditional sense—but uncontrolled data exposure at scale, often invisible until damage is irreversible.
Operationally mature organizations are responding by:
- Enforcing AI discovery and classification
- Restricting data permissions at the agent level
- Requiring human-in-the-loop validation for sensitive actions
- Treating AI governance as a living system, not a one-time policy
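Two of these controls—agent-level restrictions and human-in-the-loop validation—can be sketched as a dispatch gate that queues sensitive actions for review instead of executing them autonomously, while logging every decision for auditability. The action names and data structures are hypothetical, not drawn from any specific platform.

```python
# Hypothetical list of actions considered sensitive enough to require
# human approval before an AI agent may execute them.
SENSITIVE_ACTIONS = {"export_records", "delete_data", "change_permissions"}

audit_log = []       # every agent decision is recorded for later tracing
pending_review = []  # sensitive actions wait here for a human reviewer

def dispatch(agent_id: str, action: str, target: str) -> str:
    """Route an agent action: execute routine work, queue sensitive work."""
    entry = {"agent": agent_id, "action": action, "target": target}
    audit_log.append(entry)  # traced regardless of outcome
    if action in SENSITIVE_ACTIONS:
        pending_review.append(entry)  # human-in-the-loop gate
        return "queued_for_review"
    return "executed"
```

Because the audit entry is written before the routing decision, even blocked or queued actions leave a trace—matching the requirement that agentic actions "can be traced and audited with confidence."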
Identity Misuse: The Quiet Failure Behind Most Privacy Incidents
While AI expands the attack surface, identity remains the most exploited control plane. Microsoft’s Digital Defense Report shows that the overwhelming majority of attacks now rely on compromised credentials, while identity-focused research from Ping Identity indicates that identity-related breaches remain widespread and persistent, often driven by fragmented access controls rather than perimeter failures.
Bobby Ford, Chief Strategy & Experience Officer at Doppel, explains how attackers are adapting:

“As technology advances, so do the attackers using it. We’re seeing identity-based threats evolve faster than ever, with adversaries learning to exploit the trust people place in AI platforms.”
The challenge is behavioral as much as technical:
“The challenge isn’t that people are unaware, it’s that the positive impact of their use seems to outweigh the negative consequences of their misuse.”
Closing that gap requires sustained effort:
“Our responsibility now is to close that gap; to build awareness, resilience, and safeguards that evolve as fast as the threats themselves.”
Corey Nachreiner, CSO at WatchGuard, reinforces that most privacy failures no longer begin with perimeter breaches:

“Data privacy risk today isn’t primarily caused by attackers breaking through a firewall, it’s driven by identity compromise and the misuse of trusted access.”
Often, the entry point is deceptively simple:
“In many cases, these attacks start with something as simple as a deceptive link or download, underscoring the importance of user awareness alongside technical controls.”
Fragmented defenses create opportunity:
“When those layers operate in silos, gaps emerge that attackers are quick to exploit.”
And basic hygiene still matters:
“Simple measures like verifying download sources, using multi-factor authentication, and maintaining strong credential hygiene can stop attackers even when credentials are targeted.”
CyberTech Insights Analysis: Identity Governance Must Evolve Continuously
The data reinforces a critical reality: identity misuse is not a static problem, and it cannot be solved with one-time controls. As AI platforms increase reliance on trusted access—across humans, applications, and agents—identity governance must evolve continuously to remain effective.
Organizations that treat identity and AI governance as living systems (regularly reassessed, monitored, and adapted) are far better positioned to detect misuse early and limit blast radius. Those that rely on fragmented or static identity controls often discover privacy failures only after trusted access has already been abused, when response options and trust recovery are far more limited.
Privacy-First Measurement: Proving Value Without Breaking Trust
As privacy controls tighten and traditional identifiers disappear, organizations are being forced to rethink how performance is measured—without defaulting to surveillance-based models. The assumption that privacy necessarily undermines insight or revenue is increasingly being challenged.
Natalia Vasilyeva, EVP of Marketing & Strategy at Anzu, argues that privacy and performance are not opposing forces:

“Privacy and measurement can coexist — and gaming is proving it. In a fragmented, cross-platform ecosystem where cookies and MAIDs are declining, privacy-first identity solutions like unified IDs help bring consistency and governance to addressability and measurement. That means advertisers can prove outcomes responsibly, and publishers can monetize without compromising player trust.”
In practice, this shift reframes privacy as an enabler of durable value rather than a constraint. When identity is governed, transparent, and purpose-limited, organizations gain something more resilient than raw data access: trust-based addressability that holds up as regulations, platforms, and consumer expectations continue to evolve.
Trust Is the Real System Boundary
As data privacy becomes harder to define technically, it is becoming easier to define emotionally.
Customers, employees, and partners no longer evaluate privacy based on frameworks or certifications. They judge it based on outcomes:
- Was my data respected?
- Was it used as expected?
- Was the organization transparent when something went wrong?
In an AI-driven world, trust has become the most fragile—and most valuable—asset organizations manage. It is built slowly through consistency, restraint, and accountability, and lost quickly when systems behave in ways people don’t understand or didn’t consent to.
This is why privacy today is inseparable from leadership behavior. The decisions organizations make about visibility, escalation, disclosure, and recovery shape trust far more than any policy document. When pressure arrives, trust is not tested by what organizations say—but by what they do.
Governance Gaps: When AI Moves Faster Than Oversight
At the enterprise level, governance remains the weakest link. IBM’s Global AI Adoption Index shows that fewer than one-third of organizations have formal AI governance policies, even as AI tools are rapidly embedded across operations. Gartner further warns that a majority of AI deployments over the next two years will lack adequate risk and governance controls, creating systemic privacy and compliance exposure.
Monica Landen, CISO at Diligent, warns that AI adoption is racing ahead of accountability:

“Data Privacy Week comes at a moment when the gap between AI adoption and AI governance has never been wider.”
The consequences are increasingly visible:
“In some instances, companies have deployed generative AI solutions only to discover too late that they have inadvertently exposed sensitive customer data or violated compliance requirements.”
The data is stark:
“Recent research shows that 97% of organizations that experienced an AI-related security incident lacked proper AI access controls—a striking and preventable gap.”
And boards are not keeping pace:
“While 22% of boards have adopted formal AI governance, ethics or risk policies, another 31% have only discussed it without putting policies in place.”
This is not a tooling issue—it’s a leadership failure.
CyberTech Insights Analysis: Governance Fails Before Technology Does
The data makes clear that AI risk is no longer constrained by technical capability—it is constrained by decision-making speed and accountability. Organizations are deploying AI far faster than they are defining who owns risk, how access is governed, and how failures are escalated.
When governance lags adoption, privacy incidents become inevitable rather than exceptional. Enterprises that close this gap treat AI governance as a board-level responsibility, align it with identity and data controls, and measure it continuously—not as a one-time policy exercise. Those that don’t often discover governance weaknesses only after customer trust, regulatory standing, or business value has already been compromised.
Privacy by Design—When Systems Fail, Not When They Work
Privacy by design has long been discussed, but in 2026 it is being redefined: it is measured not during normal operations, but during failure conditions. IBM’s research shows that breaches often remain undetected for months, while ransomware studies reveal that recovery capability, not prevention alone, determines the scale of data exposure once systems are compromised.
True privacy-by-design is tested during:
- Credential compromise
- Ransomware recovery
- AI hallucinations
- API abuse
- Identity impersonation
- Insider misuse
Resilience—not perfection—is the new benchmark.
Organizations are redesigning systems with the assumption that controls will fail.
The differentiator is whether they can:
- Detect misuse early
- Contain the blast radius
- Restore data predictably
- Maintain operational continuity
- Prove what happened, when, and why
CyberTech Insights Analysis: Failure Is the Real Privacy Test
The data makes clear that privacy controls are only as credible as an organization’s ability to recover and explain failure. Long detection timelines and inconsistent recovery practices mean that many privacy incidents escalate not because controls were absent, but because resilience mechanisms were insufficient.
Privacy resilience now includes backup integrity, immutable storage, recovery testing, and identity-aware restoration processes. Without these capabilities, privacy claims collapse the moment systems behave unexpectedly—when regulators, customers, and boards demand proof rather than promises.
Designing Privacy Into Complex, Distributed Systems
As AI ecosystems grow more distributed, privacy must be engineered deliberately. IBM research shows that data complexity is now one of the top obstacles to secure AI adoption, while IDC data confirms that most enterprises operate across highly fragmented, multi-environment architectures, amplifying privacy risk when controls are not designed end-to-end.
Dan Balaceanu, CPO and Co-Founder at DRUID AI, puts it plainly:

“Data privacy is the first thing to consider when building an IT system—especially an AI solution. It is not a naïve architectural choice.”
Modern systems amplify complexity:
“IT solutions are composed of multiple services, often distributed, integrating LLM providers, vision providers, line-of-business applications, and automations.”
Responsibility cannot be abstracted away:
“Ensuring data privacy in such complex ecosystems requires expertise.”
The same applies to physical security.
Steve Van Till, Founder and President of Brivo, emphasizes trust:

“Technology, such as video surveillance and access control, keeps people, property and digital assets safe, but trust is an important part of the equation.”
Transparency is now expected:
“What’s the AI used for? What’s being recorded? Who can see it? How long is it stored?”
And privacy must be built in—not added later:
“Privacy and data protection are essential. They need to be built into the foundation of every physical security system.”
CyberTech Insights Analysis: Architecture Is a Privacy Decision
The data reinforces a critical reality: in distributed, AI-driven environments, privacy outcomes are determined at the architecture level. When systems span clouds, vendors, models, and physical infrastructure, retrofitting controls becomes nearly impossible.
Organizations that architect privacy from the outset preserve control at scale, enforcing real-time governance across data flows, access boundaries, audit trails, and retention policies. Those that fail to do so typically discover privacy exposure only after data has escaped effective oversight.
Measuring and Proving Privacy Resilience
In 2026, privacy maturity is no longer defined by policy libraries or certification badges. It is defined by evidence.
Leading organizations are shifting from declarative privacy to verifiable privacy, using metrics such as:
- Mean time to detect identity misuse
- Time to contain unauthorized access
- Recovery time objectives tied to sensitive data
- Auditability of AI agent actions
- Visibility across identity, endpoint, network, and cloud layers
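The first of these metrics, mean time to detect identity misuse, can be computed from incident records in a few lines. The record layout below is an assumption for illustration, not a real schema.

```python
from datetime import datetime

# Illustrative incident records: when identity misuse began vs. when
# it was detected. Fields and timestamps are invented for the example.
incidents = [
    {"start": datetime(2026, 1, 3, 9, 0),   "detected": datetime(2026, 1, 3, 9, 45)},
    {"start": datetime(2026, 1, 10, 14, 0), "detected": datetime(2026, 1, 10, 16, 0)},
]

def mean_time_to_detect_minutes(records) -> float:
    """Mean time to detect (MTTD), in minutes, across recorded incidents."""
    gaps = [(r["detected"] - r["start"]).total_seconds() / 60 for r in records]
    return sum(gaps) / len(gaps)
```

For the two sample incidents (45 and 120 minutes), the MTTD is 82.5 minutes. Tracking this figure over successive drills is one way to make "verifiable privacy" a trend line rather than a claim.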
Attackers now exploit speed, automation, and human trust. Defenders must match that tempo with continuous detection, behavioral analysis, and regular crisis simulations.
Privacy resilience is increasingly tested the same way resilience engineering is—through drills, red-team exercises, and post-incident reviews that focus on system behavior, not blame.
Governance Reality: Where Boards, CISOs, and Regulators Diverge
One of the clearest gaps exposed during Data Privacy Week 2026 is governance alignment.
As Monica Landen points out, AI adoption is accelerating far faster than AI governance. While some boards have formal policies in place, many remain in discussion mode—despite mounting regulatory, financial, and reputational risk.
Privacy incidents tied to AI are no longer hypothetical.
They are triggering:
- Regulatory investigations
- Contractual fallout
- Customer churn
- Executive accountability
Operational leaders are responding by elevating privacy to a board-level resilience issue, not a compliance update. The most mature organizations now treat AI governance, identity risk, and data privacy as part of enterprise risk management—not just security operations.
Speed, Scale, and the New Reality of Privacy Risk
Finally, Mike Baker, Global CISO at DXC Technology, highlights the urgency facing leaders:

“The rate of change with AI far exceeds what we saw with cloud.”
Organizations no longer have years to adapt:
“With AI, that urgency is eight, even 10-fold, where if you’re not on board in three to six months, you may never catch back up.”
Threats are already automated:
“In most cases, there aren’t legions of keyboard warriors behind these attacks, rather models manipulated to incessantly probe and penetrate at machine speed and scale.”
At this pace, privacy failures are no longer edge cases—they are stress tests that reveal whether governance, identity, and resilience were ever real to begin with.
Conclusion: Privacy That Holds When Trust Is Tested
Data Privacy Week 2026 makes one truth unavoidable: privacy is no longer theoretical—it is tested when identities are misused, when AI agents act unexpectedly, and when systems fail under real-world pressure.
The organizations that will earn trust in the AI era are those that treat privacy and security as foundational principles rather than afterthoughts—embedding governance, visibility, and accountability into every system from the start.
In an always-on, AI-driven world, privacy is no longer declared. It is demonstrated—under pressure.
To participate in upcoming interviews, please reach out to our CyberTech Media Room at info@intentamplify.com