Data privacy is no longer a compliance checkbox or a once-a-year conversation. As organizations enter 2026, privacy has become a live operational discipline, tested continuously by AI-driven workflows, expanding digital identities, and attackers who no longer need to “break in” to cause damage.

Data Privacy Day, observed annually on January 28, serves as a global reminder of the growing responsibility organizations have to protect personal data in an AI-driven world. Rooted in the legacy of Convention 108, the 2026 theme—“Take Control of Your Data”—underscores a critical shift: privacy is no longer just about compliance but about trust, accountability, and sustainable digital growth.

Across industries—from healthcare and financial services to SaaS and critical infrastructure—the same reality is emerging: data privacy failures today are rarely caused by a single breach. They are the result of weak visibility, unmanaged access, fragmented controls, and an inability to detect, contain, and recover when systems behave in unexpected ways.

As leaders across cybersecurity, identity, and AI governance reflect on Data Privacy Week 2026, a clear theme emerges: privacy must be engineered for failure, not perfection. In an AI-driven, always-on enterprise, data privacy is no longer defined by static controls or annual compliance reviews—it is measured by resilience, visibility, governance, and the ability to recover under pressure.

To explore how organizations must rethink privacy in 2026 and beyond, CyberTech Insights invited global cybersecurity leaders, CISOs, and technology experts to share their perspectives on Data Privacy Week.

Our contributors include:

Cyber Resilience & Recovery

  • Chris Millington — Global Solutions Lead, Data & Cyber Resilience, Hitachi Vantara
  • Kev Breen — Senior Director of Threat Research, Immersive

AI Governance, Privacy & Automation

  • Diana Kelley — CISO, Noma Security

Identity, Access & Zero Trust

  • David Lee — Field CTO, Saviynt
  • Patrick Harding — Chief Product Architect, Ping Identity
  • Greg Wetmore — Vice President, Product Development, Entrust
  • Corey Nachreiner — Chief Security Officer, WatchGuard

Healthcare Data Privacy & Patient Trust

  • Sean Kelly — Chief Medical Officer & SVP, Customer Strategy, Imprivata
  • Martin Raison — Co-founder & CTO, Nabla

Enterprise Leadership, Culture & Trust

  • Ravi Soin — CIO & CISO, Smartsheet
  • Doug Kersten — Chief Information Security Officer, Appfire
  • Bobby Ford — Chief Strategy & Experience Officer, Doppel

Threat Detection, Networks & Behavior

  • Mark Wojtasiak — SVP, Product Research & Strategy, Vectra AI

From Prevention to Resilience: Why Recovery Is the New Privacy Imperative

Organizations have invested heavily in preventative security controls, yet breaches continue to escalate in frequency, cost, and business impact. According to Chris Millington, the focus must now shift toward resilience:

“Breaches can be business killers – on average, knocking out companies for at least 22 days. Regardless of what businesses do, bad actors find a way. The smart organizations in 2026 will prioritize resilience, taking a holistic approach to data management, understanding how point-in-time maps to ransomware resilience at the storage layer and building predictable recovery processes to keep their operations running when breaches occur.”

Millington’s point reflects a broader industry recalibration. The goal is no longer to promise zero breaches but to ensure operations, data integrity, and trust survive the breach.

“The more opportunistic attackers become, the more organizations need to be ready to recover from attacks – rather than just focusing on preventative measures. In the year ahead, look for organizations to shift more of their spending from prevention toward building resilience by strengthening their ability to restore data, maintain operations and minimize business disruption. Adopting a secure-by-design approach is no longer optional—it’s becoming mandatory.”

Chris Millington is Hitachi Vantara’s Global Solutions Lead, Data and Cyber Resilience.

Privacy Fails When Visibility Fails

Modern enterprises no longer control data within a single perimeter. Data now moves continuously across identities, SaaS platforms, APIs, clouds, and AI-driven systems—often faster than security teams can observe.

As Mark Wojtasiak explains:

“Privacy failures don’t start with a single breach. They happen when organizations can’t see or respond fast enough as behavior changes.”

Static controls and one-time assessments cannot protect dynamic systems. True privacy-by-design depends on continuous detection and response:

“Privacy by design only works when teams can detect abnormal access early, contain misuse quickly, and limit blast radius when controls inevitably fail. In resilient organizations, privacy isn’t something you hope holds—it’s something you can measure and prove under pressure. The ability to detect, contain, and recover from misuse is what ultimately determines whether personal data stays protected in an AI-driven world.”

In 2026, privacy maturity is increasingly measured not by policies but by how quickly organizations can prove control under pressure.

Mark Wojtasiak is the SVP of Product Research and Strategy at Vectra AI.

AI Governance and the New Privacy Surface

AI has dramatically expanded the privacy attack surface. Data is no longer just stored or processed—it is ingested, transformed, reused, and acted upon autonomously.

According to Diana Kelley, privacy and AI governance are now inseparable:

“AI privacy governance requires that organizations understand nuances such as which data sets are being used by AI models, what sensitive information employees are entering into those systems, and how that data may persist, propagate, or be reused.”

The rise of agentic AI further complicates the trust model:

“Agentic AI systems flatten traditional trust boundaries, creating new paths for unintended data exposure and exfiltration.”

Manual oversight simply cannot keep pace:

“AI, automation, and governance now play a central role in scaling privacy responsibly. Manual processes cannot keep up with the speed and complexity of modern data environments, especially as AI systems continuously ingest, transform, and act on data. In practice, this means organizations are using automated discovery to identify sensitive data flowing into AI models, enforcing policies that restrict what employees and applications can submit to those systems, and continuously monitoring for privacy violations as models and agents evolve.”

Diana Kelley is the CISO at Noma Security.

Identity Is the Control Plane for Data Privacy

Across nearly every breach, one constant remains: misused or over-privileged access.

As David Lee puts it plainly:

“AI doesn’t create new security problems; it exposes the ones we already ignored… Identity is the control plane for data access, whether that access comes from a person, an application, or an AI agent.”

Without governance, visibility, and accountability around access:

“You don’t have a data protection strategy; you have hope.”

David Lee is the Field CTO at Saviynt.

Privacy as Culture, Not Compliance

For enterprise leaders, privacy is increasingly about trust, accountability, and organizational behavior.

As Ravi Soin notes:

“As we mark Data Privacy Week 2026, we must recognize that privacy isn’t something we can check off our list once a year. It’s a fundamental right that requires our constant attention and action. We need to go beyond awareness campaigns and make privacy a core part of everything we do—how we design products and systems, how we manage security and risk, how we choose and oversee vendors, and how we lead our teams and shape our culture.”

Vendor transparency is now non-negotiable:

“Customer data belongs to customers. Period.”

Ravi added, “Prioritizing data privacy pays dividends: it helps reduce exposure to security threats and data leakage as AI scales, and it reinforces confidence in the organization, strengthening customer trust.”

Ravi Soin is the CIO/CISO at Smartsheet.

Healthcare: Where Privacy Failure Becomes Patient Risk

Few sectors feel the stakes of privacy failure more acutely than healthcare. According to Dr. Sean Kelly:

“Few industries demand higher standards for data privacy than healthcare, where protecting sensitive patient health information is fundamental to building patient trust and ensuring continuous care delivery. As one of the most targeted industries for cyberattacks, the stakes are even higher, yet hospitals’ security practices lag behind. Outdated access management strategies like passwords are no longer enough to protect patient privacy and secure healthcare data. Our research shows that 60% of health systems still rely heavily on passwords for user authentication, with more than 40% linking them directly to increased risk of breach.”

Password-centric models undermine both security and care delivery:

Dr. Kelly added, “Without modern access controls that reliably verify user identity and limit access to the right people at the right time, patient privacy remains at risk. The challenge is strengthening protections without disrupting clinical workflows, since friction often leads to workarounds that can undermine security. Equally important is the ability to detect and respond to identity-based threats in real time, before compromised access escalates into a broader breach of sensitive healthcare data. This Data Privacy Week should serve as a wake-up call for healthcare organizations: password-heavy workflows are not only increasing risk but also fueling frustration and burnout. Shifting to identity-centric access models helps strike the right balance, reducing friction without introducing new vulnerabilities. By moving toward a passwordless future, healthcare leaders can lower risk, simplify workflows, and lay a stronger foundation for what comes next.”

Dr. Sean Kelly is the Chief Medical Officer and SVP of Customer Strategy at Imprivata.

The Future of Privacy Is Continuous, Verifiable Trust

From identity ecosystems to adaptive verification, leaders agree that privacy must evolve into a measurable, provable state.

Patrick Harding, Chief Product Architect at Ping Identity, said, “This week offers an opportunity to pause and assess the rapidly evolving landscape of digital trust, as privacy really boils down to choice and trust around how personal data is being used. Data privacy is no longer a passing concern for consumers – it has become a defining factor in how they judge brands, with three-quarters now more worried about the safety of their personal data than they were five years ago, and a mere 14% trusting major organizations to handle identity data responsibly. Whether it’s social engineering, state-sponsored impersonation, or account takeover risks, AI will continue to test what we know to be true. As threats advance and AI agents increasingly act on behalf of humans, only the continuously verified should be trusted as authentic.

“For businesses, the path forward is clear: trust must be earned through transparency, verification, and restraint in how personal data is collected and used. The businesses that adopt a ‘verify everything’ approach that puts privacy at the center and builds confidence across every identity, every interaction, and every decision, will have the competitive edge.”

As Entrust’s Greg Wetmore explains, “Data security has evolved from isolated checkpoints, like passwords or MFA, into an interconnected, agile identity ecosystem. Having more digital presence does not automatically mean more risk; smart digital identity design can actually make you safer.

“Real identity resilience comes from synergy across layers of biometric, behavior, historical, and device data. Instead of relying on a single moment of verification, modern security systems build trust over time, creating a baseline for future activity and adapting as the context changes and as threats evolve. The future of digital trust depends on a multi-layered approach to protecting identities that replaces rigid perimeter-based checkpoints with adaptive, intelligence-driven systems that put the control back in the hands of everyday individuals.”

Greg Wetmore is the Vice President of Product Development at Entrust.

Social Engineering, Shadow AI, and the Human Factor

As attackers increasingly exploit trust rather than vulnerabilities, privacy risk becomes as much a human problem as a technical one.

From AI-enabled social engineering to shadow AI adoption, leaders warn that unmanaged innovation accelerates exposure.

“Blind trust in AI outputs… is allowing sensitive data to slip beyond organizational control,” warns Fernando Martinez Sidera.

“Technology alone is not enough,” adds Kev Breen. “People remain a primary attack vector.”

Kev Breen said, “Data privacy remains one of the most significant business risks organizations face, as attackers increasingly focus on stealing large volumes of sensitive data with minimal effort. Once exposed, that data is routinely reused for phishing and social engineering, creating lasting consequences for customers and organizations alike.”

Kev added, “These incidents show that technology alone is not enough. Social engineering continues to grow more sophisticated, making people a primary attack vector even in environments with strong technical controls. In 2025, rapid adoption of generative AI further expanded risk, as organizations rushed to deploy tools that give employees and systems direct access to internal data. Architectures such as RAG-enabled chat interfaces were often implemented without sufficient safeguards, leading to accidental exposure through prompt injection and misuse.”

“At the same time, long-standing weaknesses in data access controls continue to surface in other parts of the attack surface. A recent example was the suggestion that Instagram had suffered a major data breach. In reality, reports pointed to abuse of legitimate API access, where large volumes of improperly constrained data were scraped and later sold. While technically different from a breach, the outcome was the same: sensitive information at scale ended up in criminal hands. As Data Privacy Week 2026 reminds us, protecting data isn’t just about keeping bad actors out. It’s about battle-testing teams so they can recognize exposure risks early, respond effectively under pressure, limit damage, and recover quickly when a cyber crisis inevitably occurs,” concluded Kev.

Kev Breen is the Senior Director of Threat Research at Immersive.

Conclusion: Privacy That Survives Reality

Data Privacy Week 2026 highlights a defining shift: privacy is no longer about preventing every incident—it’s about surviving them.

The organizations that will earn trust in the AI era are those that can see clearly, act quickly, and recover predictably—across identities, data, systems, and people.

As Doug Kersten, Appfire’s CISO, summarizes:

“Security and privacy can no longer be treated as a purely technical problem—it must be embedded into daily operations and everyday behavior.”

In an AI-driven world, privacy is no longer promised—it is proven.

Coming Next: Data Privacy Week 2026 — Part Two

In Part Two, CyberTech Insights will go deeper into the operational realities behind today’s privacy challenges. We’ll examine how organizations are turning privacy from principle into practice—from governing agentic AI and eliminating shadow access to redesigning identity architectures and proving resilience under real-world attack conditions.

The next installment will explore:

  • How identity-first security is reshaping data protection in AI-driven enterprises
  • Why agentic AI and shadow AI are redefining privacy risk models
  • What “privacy by design” looks like when systems fail—not when they work
  • How leading organizations are measuring, testing, and proving privacy resilience
  • Where regulators, CISOs, and boards are aligning—and where gaps remain

As data privacy moves from policy to pressure-tested reality, Part Two will focus on what actually works when trust is on the line.

To participate in upcoming interviews, please reach out to our CyberTech Media Room at info@intentamplify.com