ESET has introduced a new set of AI-driven security capabilities aimed at protecting enterprise chatbot communications and AI-powered workflows. The company showcased these innovations at RSAC 2026, with plans to officially roll them out later this year. These enhancements will extend the visibility of the ESET PROTECT Platform, allowing organizations to better monitor and manage risks associated with everyday AI usage and the growing adoption of agentic AI.

As businesses increasingly integrate AI tools into daily operations, security challenges continue to rise. Highlighting this shift, Juraj Jánošík, ESET Director of Artificial Intelligence stated, “As companies rely more on AI for productivity and automation, they face growing risks around sensitive data exposure, compliance violations, and misleading outputs,”

He further emphasized, “Agentic AI is shifting the security battlefield back to the endpoint. ESET has spent over 30 years building leading endpoint protection powered by AI and machine learning, so we’re uniquely positioned to help organizations secure this next wave of AI right where it starts.”

Moreover, the rapid adoption of AI tools—especially cloud-based chatbots—has introduced new risks. Many employees unknowingly use these tools without IT supervision, leading to what experts call “shadow AI.” Consequently, sensitive information such as internal documents, API keys, credentials, and confidential data may become exposed.

To address these concerns, ESET has implemented advanced technologies that operate close to the data source. For instance, its secure browser solution actively intercepts AI interactions, analyzing both user prompts and chatbot responses in real time. As a result, organizations can prevent data leaks and identify malicious or misleading content before it impacts users.

During live demonstrations at RSAC 2026, ESET’s new feature successfully flagged malicious URLs submitted through chatbot prompts. Additionally, it recorded endpoint activity and displayed it within the ESET PROTECT Platform for further investigation. This capability also extends to detecting prompt injection attacks, suspicious scripts, and sensitive data inputs. Therefore, security teams can enforce policies more effectively by blocking or monitoring risky behavior.

At the same time, the rise of agentic AI introduces broader “AI supply chain” risks. These include compromised frameworks, trojanized libraries such as LiteLLM, and autonomous agents like OpenClaw that can execute system-level actions with minimal oversight. ESET acknowledged the increasing frequency of such threats and reaffirmed its commitment to advancing research in this area.

In addition, ESET launched a free AI Skills Checker at RSAC 2026. This tool evaluates AI-generated content for hidden instructions, malicious code, and unsafe behaviors using multi-layered inspection and cloud-based sandboxing. It is available to both existing users and non-customers.

Finally, ESET continues to collaborate with leading technology players, including OpenAI, Amazon, Microsoft, and Anthropic through the Agentic AI Foundation. Together, they aim to establish secure standards and protocols for AI communication, ensuring safer and more reliable AI ecosystems in the future.

Recommended Cyber Technology News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com  



🔒 Login or Register to continue reading