Cyber resilience is now a boardroom mandate rather than a technical afterthought. Adversaries innovate at relentless speed, exploiting scale, automation, AI-driven tooling, and human error. In that arena, preparedness demands proof, not presentation.
Welcome to the CyberTech Top Voice Interview Series, highlighting the architects of modern cybersecurity strategy, AI-era defense, and workforce resilience.
I’m Sudipto Ghosh, and in this conversation I speak with Kev Breen, Senior Director of Cyber Threat Research at Immersive. With more than two decades spanning military cyber defense, malware reverse engineering, open-source tooling, and advanced threat intelligence, Kev bridges adversarial tradecraft and enterprise defense discipline.
Early service in the Royal Signals instilled deep operational rigor as he defended UK Ministry of Defence networks against sophisticated campaigns. His subsequent open-source contributions, including defensive toolkits and widely adopted YARA rules recommended by VirusTotal, raised community-wide detection capabilities while drawing attention from defenders and threat actors alike.
At Immersive, Kev leads applied research that helps organizations assess, strengthen, and validate cyber workforce resilience against emerging threat patterns. His work emphasizes measurable readiness under pressure, especially as AI-assisted attack campaigns reshape enterprise risk.
This discussion examines the evolution of cloud-centric attack surfaces, AI-driven acceleration across offensive and defensive operations, shifting priorities in vulnerability management, and the rising importance of immersive crisis simulation for enterprise leadership.
Here’s the full interview.
Hi Kev, welcome to the CyberTech Top Voice interview series. Tell us a bit about your role at Immersive and how you contribute to the cybersecurity innovation landscape.
I’m the Senior Director of Cyber Threat Research at Immersive. I help organisations assess, build, and prove their cyber workforce resilience against new and emerging threats.
In my current role, I focus on applied research that helps organisations better understand and defend against real-world threats, building on years of open-source work that includes defensive tooling for malware analysis and widely adopted YARA rules.
Given your background in both military cyber defense and civilian threat research, how has your perspective on emerging threats evolved over the past decade, especially in the context of cloud-centric enterprises?
Across both military cyber defence and civilian threat research, one reality has remained constant: threat actors, whether nation-state or e-crime, rapidly adopt new technologies and weaponize them. As organizations shifted from tightly controlled networks to cloud and hybrid-cloud environments, attackers evolved in parallel, exploiting scale, misconfiguration, and human error rather than relying on any single piece of malware.
In the early 2010s, indiscriminate browser-based exploit kits delivered infostealers and banking trojans to individuals because that was where value lay. Today, ransomware groups instead exploit zero- and n-day vulnerabilities in edge devices to compromise entire networks, while also adapting their tooling to target cloud and virtual assets.
The early 2020s provided a stark illustration of this shift. The “Meow bot” attacks targeted publicly exposed and unprotected Elasticsearch and MongoDB instances, compromising tens of thousands of cloud-hosted data stores at scale. These were not sophisticated zero-day operations; they were automated campaigns exploiting basic exposure and misconfiguration. The lesson was clear: in cloud-centric enterprises, scale amplifies both innovation and risk. When defensive fundamentals lag behind rapid adoption, attackers capitalize immediately.
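To make that exposure concrete, here is a minimal defensive sketch, not from the interview, of the kind of check teams can run against their own infrastructure: an Elasticsearch endpoint that answers API calls without credentials is exactly the class of misconfiguration those automated campaigns swept up. The hosts and the `requests` dependency are illustrative assumptions.

```python
# Hypothetical defensive check: does an Elasticsearch endpoint answer
# without credentials? Hosts and ports here are illustrative only.
import requests

def check_unauthenticated_es(host: str, port: int = 9200, timeout: int = 5) -> bool:
    """Return True if the Elasticsearch API responds without authentication."""
    try:
        # An unprotected cluster answers GET / with cluster metadata (HTTP 200).
        resp = requests.get(f"http://{host}:{port}/", timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or connection refused; not confirmably exposed
    if resp.status_code != 200:
        return False  # 401/403 suggests authentication is enforced
    # Listing indices without credentials confirms readable data exposure.
    indices = requests.get(
        f"http://{host}:{port}/_cat/indices?format=json", timeout=timeout
    )
    return indices.status_code == 200

if __name__ == "__main__":
    for host in ["10.0.0.12", "10.0.0.13"]:  # replace with hosts you own
        if check_unauthenticated_es(host):
            print(f"{host}: Elasticsearch reachable without auth -- lock it down")
```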
In your view, how has the threat landscape changed as AI and automation become more integrated into both attack campaigns and defensive tooling? Are there misconceptions around AI security readiness?
The most meaningful change in the threat landscape isn’t fully autonomous attacks; it’s speed and scale. In practice, attackers are using AI much like defenders are: as an assistant. It helps generate phishing content, write snippets of malware, and reduce friction, but it doesn’t replace human direction. Campaigns still require judgment, tradecraft, and decision-making to succeed.
From a defender’s perspective, that distinction is largely invisible anyway. You can’t tell whether an alert, email, or payload was generated by a human, a script, or an AI model — and it doesn’t really matter. What matters is how teams respond and whether they can interpret incomplete signals, make decisions under pressure, and act quickly when context is limited.
Defensive AI agents are beginning to take on roles similar to SOC analysts, triaging alerts with the goal of reducing alert fatigue and allowing analysts to focus on high-priority tasks. The risk, however, is that these models may not understand your specific network. To know what is bad, you first need to know what is normal, and generic LLMs fed only alerts may not have enough contextual information within their context window to make that distinction.
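As a rough illustration of that point, consider the sketch below, which assumes a generic LLM triage setup rather than any specific product. The baseline summary is what gives the model a notion of "normal" for this particular environment; `llm_complete` is a stand-in for whatever model call a stack actually uses.

```python
# Sketch: enriching an alert with a learned baseline before LLM triage.
# `llm_complete` is a stand-in for whatever model call your stack uses.
from collections import Counter

def build_baseline(past_events: list[dict]) -> dict:
    """Summarise 'normal' from historical events: common users, hosts, hours."""
    return {
        "common_users": Counter(e["user"] for e in past_events).most_common(5),
        "common_hosts": Counter(e["host"] for e in past_events).most_common(5),
        "typical_hours": sorted({e["hour"] for e in past_events}),
    }

def triage_prompt(alert: dict, baseline: dict) -> str:
    # Without the baseline block, the model sees only the alert and must
    # guess what is unusual for this specific network.
    return (
        "You are a SOC triage assistant.\n"
        f"Environment baseline: {baseline}\n"
        f"Alert: {alert}\n"
        "Is this alert anomalous for THIS environment? Answer with a "
        "priority (low/medium/high) and a one-line justification."
    )

alert = {"user": "svc-backup", "host": "db-eu-3", "hour": 3, "action": "mass_read"}
history = [{"user": "svc-backup", "host": "db-eu-3", "hour": h, "action": "read"}
           for h in (1, 2, 3)]
print(triage_prompt(alert, build_baseline(history)))
# print(llm_complete(triage_prompt(...)))  # hypothetical model call
```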
Where organisations tend to misjudge their readiness is assuming that adopting AI tools equates to being prepared. We often see high confidence that fades during real incidents, especially when sensitive data is exposed at scale. Attackers increasingly focus on stealing large volumes of data efficiently, knowing it can be reused for phishing and social engineering long after the initial breach. That’s where the real gap shows up — not in tooling, but in how prepared teams are to respond when it counts.
What are the most significant gaps you currently see in how organizations build cyber workforce resilience? What practical steps can security leaders take today to close them?
The largest gap in cyber resilience arises from organisations training for awareness, not for reality. People know what “good” looks like on paper, but they haven’t been tested in situations where the signals are messy, time is limited, and the right answer isn’t obvious. When pressure hits, that gap shows up very quickly.
The practical fix isn’t complicated, but it does require a mindset shift. Teams need to be exercised regularly in realistic scenarios, not just trained once and signed off. Leaders also need to measure how people actually perform under pressure, then focus training on the gaps that show up. Upskilling isn’t a “nice to have” or a retention perk; it’s how you reduce risk when security inevitably breaks.
Research teams often publish open-source tooling and YARA rules that are consumed by defenders and adversaries alike. How do you balance openness with the risk of enabling misuse at Immersive?
Open research is still important for defense. Sharing tools and detection logic helps the wider community defend itself faster, and pulling everything behind closed doors doesn’t make anyone safer. But openness has to be deliberate.
The balance comes from focusing on what defenders need, not what makes an attack easier to reproduce. That means being careful about how much operational detail you publish, adding context so rules aren’t misused, and thinking through how something could be abused before releasing it.
The goal isn’t secrecy — it’s making sure that what you share raises the bar for defenders more than it lowers the bar for attackers.
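As a hedged illustration of what "adding context" can look like in practice, here is a hypothetical YARA rule, compiled via the yara-python bindings, whose metadata spells out confidence, scope, and known false positives so a triage rule can't easily be mistaken for a blocking rule. The rule, its strings, and the sample data are all invented for this example.

```python
# Sketch: a hypothetical YARA rule published with context metadata so
# consumers know what it detects, its confidence, and its false-positive
# profile. Requires the yara-python package; the rule itself is illustrative.
import yara

RULE_SOURCE = r'''
rule Suspicious_Stager_Strings_Example
{
    meta:
        description = "Example: strings seen in a hypothetical loader family"
        author      = "research team"
        confidence  = "medium"
        scope       = "triage only - corroborate before blocking"
        false_positives = "legitimate admin tools embedding similar strings"
    strings:
        $a = "stage2_download" ascii
        $b = "disable_defender" ascii nocase
    condition:
        all of them
}
'''

rules = yara.compile(source=RULE_SOURCE)
sample = b"...stage2_download...DISABLE_DEFENDER..."
for match in rules.match(data=sample):
    # Surface the metadata alongside the hit so analysts see the caveats.
    print(match.rule, match.meta)
```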
Looking at vulnerability management today — particularly around widely used platforms like Microsoft and cloud services — what trends should CISOs prioritize in 2026?
What stands out is that attackers still aren’t doing anything particularly new. Most incidents come back to the same issues we’ve seen for years: known vulnerabilities, weak credentials, misconfigurations, and overly broad access.
What has changed is how easy it is to turn those weaknesses into impact. Attackers are increasingly focused on getting access to large volumes of sensitive data with very little effort. Sometimes that’s through a traditional vulnerability; other times it’s through abuse of legitimate features like APIs or integrations. Technically, it might not always be called a breach, but the outcome is the same. Sensitive data ends up in the wrong hands and gets reused over and over again for phishing and social engineering.
For CISOs, the priority in 2026 shouldn’t just be patching faster. It should be limiting data exposure, tightening access paths, and making sure teams can spot misuse early and respond quickly when controls fail, because they will.
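One way to picture "spotting misuse early" is a simple per-identity egress baseline, sketched below under illustrative assumptions about log fields and thresholds. Bulk data theft through legitimate APIs tends to show up as a volume anomaly before it shows up as anything a signature would catch.

```python
# Sketch: flag identities whose API data egress jumps well above their own
# baseline. Log fields and the 10x threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def egress_outliers(api_log: list[dict], factor: float = 10.0) -> list[str]:
    """Return principals whose latest daily egress exceeds factor x their mean."""
    daily = defaultdict(list)  # principal -> [bytes out per day, oldest first]
    for event in api_log:
        daily[event["principal"]].append(event["bytes_out"])
    flagged = []
    for principal, volumes in daily.items():
        baseline = mean(volumes[:-1]) if len(volumes) > 1 else volumes[0]
        if volumes[-1] > factor * baseline:
            flagged.append(principal)
    return flagged

log = [
    {"principal": "svc-export", "bytes_out": 5_000_000},
    {"principal": "svc-export", "bytes_out": 6_000_000},
    {"principal": "svc-export", "bytes_out": 900_000_000},  # sudden bulk pull
]
print(egress_outliers(log))  # ['svc-export']
```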
What role do interactive, skills-based platforms play in closing the cyber skills gap compared to traditional certification paths? How should organizations measure success in workforce development?
Certifications are useful, but they tell you what someone knows rather than how they’ll behave when something actually goes wrong. Interactive, skills-based platforms fill that gap by putting people into realistic situations where they have to interpret signals, make decisions, and act under pressure.
That matters, especially in data exposure scenarios, where the difference between a minor incident and a major one often comes down to how quickly someone recognizes what’s happening and responds. Completion rates and awareness scores don’t tell you that.
Success should be measured by performance: how accurately teams respond, how quickly they contain issues, and whether they get better over time. If you can’t see that improvement, you don’t actually know how ready you are.
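For a sense of what performance-based measurement might look like, here is a small sketch. The field names and exercise records are hypothetical, but the metrics are the ones described above: response accuracy, containment time, and whether both improve over time.

```python
# Sketch: performance-based readiness metrics from exercise results.
# Field names are hypothetical; the point is measuring outcomes, not completions.
from statistics import mean

exercises = [  # one record per team exercise, oldest first
    {"correct_actions": 6, "total_actions": 10, "minutes_to_contain": 95},
    {"correct_actions": 8, "total_actions": 10, "minutes_to_contain": 60},
    {"correct_actions": 9, "total_actions": 10, "minutes_to_contain": 40},
]

accuracy = [e["correct_actions"] / e["total_actions"] for e in exercises]
contain = [e["minutes_to_contain"] for e in exercises]

print(f"latest accuracy: {accuracy[-1]:.0%} (average {mean(accuracy):.0%})")
# The trend is the signal: readiness you cannot see improving is readiness
# you cannot prove.
trend = "improving" if contain[-1] < contain[0] else "flat or worse"
print("containment times (min):", contain, "->", trend)
```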
Looking ahead, how do you see the definition of “cyber crisis simulation” evolving over the next five years, and what role will platforms like Immersive Labs play in that shift?
Crisis simulation is moving away from static tabletop exercises and toward hands-on, high-pressure scenarios that look much more like real incidents. Talking through a response is very different from having to execute one with incomplete information and a clock ticking.
Over the next few years, simulations will focus more on how teams handle ambiguity, how they coordinate across functions, and how quickly they can limit damage. This is especially true with incidents involving data exposure. That’s where readiness either holds up or falls apart.
Platforms like Immersive empower organisations to battle-test these scenarios at scale, measure performance, and prove readiness before a real crisis occurs.
For product developers and security technologists entering AI-led cybersecurity roles, what mindset shifts and skills will be critical to succeed as systems become more autonomous and adversarial?
The biggest mindset shift is accepting that AI doesn’t remove humans from the equation; it puts more pressure on them. As systems become more automated, people are left to make the harder calls, often with less certainty and less time.
That means understanding where AI works well and where it doesn’t. Agentic systems can speed things up and reduce repetitive work, but they can also hallucinate, make poor assumptions, or introduce subtle logic flaws. That’s why human oversight is critical, especially at key decision points.
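A minimal sketch of that oversight pattern, with invented action names and risk tiers, might look like the following: low-risk remediations flow through automatically, while irreversible actions pause for a human decision.

```python
# Sketch: a human approval gate at the decision point of an automated
# remediation pipeline. Risk tiers and action names are illustrative.
HIGH_RISK = {"isolate_host", "revoke_all_sessions", "block_subnet"}

def require_approval(action: str) -> bool:
    """Pause the pipeline and ask a human before irreversible actions."""
    answer = input(f"Agent proposes '{action}'. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str) -> None:
    print(f"executing: {action}")  # stand-in for the real remediation call

def run_playbook(proposed_actions: list[str]) -> None:
    for action in proposed_actions:
        if action in HIGH_RISK and not require_approval(action):
            print(f"skipped (human veto): {action}")
            continue
        execute(action)  # low-risk actions flow through automatically

run_playbook(["rotate_api_key", "isolate_host"])
```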
The people who do well in this space will be the ones who can think critically about AI outputs, understand data flows and failure modes, and step in when automation starts to drift. Technology will keep evolving. Judgment and adaptability are what will make the difference.
Thank you so much, Kev, for answering all our questions! We look forward to having you again at the CyberTech Top Voice program.
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com