As artificial intelligence adoption accelerates, cybersecurity leaders increasingly stress the importance of visibility. Echoing the wisdom of Peter Drucker, the principle remains clear—organizations cannot manage or secure what they fail to measure or even detect. In today’s AI-driven ecosystem, this idea has become more relevant than ever. Companies like Netskope are reinforcing this approach by enabling deep visibility across cloud, data, and user activity—helping enterprises stay ahead of evolving threats.

Currently, enterprises are rapidly integrating AI agents and tools into their workflows. However, these systems often operate with minimal oversight, accessing sensitive data and executing tasks across multiple platforms. While many organizations initially focused on securing north-south traffic—data moving between users and applications—a more complex challenge has emerged. East-west traffic, generated by AI tools interacting across systems internally, now represents a critical and often overlooked risk vector.

According to Tony Burnside, SVP for APJ at Netskope, organizations must urgently address this evolving threat landscape. “It’s a challenge that organisations must address quickly,” he says. “Current tools don’t give the visibility that teams demand into north-south and east-west traffic. I think the east-west traffic with MCP [Model Context Protocol] is growing faster than ever.”

Furthermore, the rapid rise of AI agents mirrors the earlier challenges of shadow IT. In the past, employees introduced unauthorized applications into enterprise environments. Today, they deploy AI tools—often without fully understanding the risks involved. This shift significantly increases exposure to vulnerabilities.

Burnside highlights the growing concerns around Model Context Protocol (MCP), particularly the risks tied to contextual data misuse. He emphasizes, “Netskope has always had a deep understanding of data and the context around it. When you come to MCP there’s things like context poisoning, where malicious data is injected into the communication, trying to get the AI to do something it shouldn’t. That might be invoking privileged tools, data over-collection, or trying to access sensitive information. What if one MCP server is compromised and contaminates and compromises others?”

To mitigate these risks, experts recommend decoupling security frameworks from AI models. Although AI systems may include built-in protections, organizations need a unified security fabric that enforces consistent policies across all tools—without sacrificing performance.

Moreover, Burnside stresses that security controls must evolve beyond traditional approaches. “Security controls need to be omni-directional. Organisations need to ensure users are not sending sensitive information such as PII, health information or intellectual property to tools outside the corporate security bubble. This is where robust DLP [data loss prevention] tools are critical.”

In addition, relying solely on keyword-based detection is no longer sufficient. Instead, advanced DLP systems must analyze behavioral patterns and contextual signals to identify threats effectively.

From a network standpoint, while proxy-based architectures still play a role, exceptions and bypass mechanisms often introduce vulnerabilities. Burnside warns, “If there’s any exception or bypass you create a potential vulnerability. You really cannot have trade-offs between security and performance. You must have security without adversely affecting performance. But organisations need to reduce their risk as well. Architecture matters and you must architect it right so that you can see everything and control everything.”

As AI continues to evolve, traditional tools alone cannot address emerging risks. New solutions, such as AI-specific gateways, allow organizations to enforce controls without routing sensitive data externally. Additionally, integrating AI signals into Security Operations Centers (SOCs) enhances threat detection and response.

Burnside concludes by emphasizing innovation alongside security: “You’ve got to allow organisations to innovate. They’re doing some really great things with AI, but you’ve got to secure it and give them the performance they need.”

Recommended Cyber Technology News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com  



🔒 Login or Register to continue reading