Cybersecurity researchers have long warned that generative artificial intelligence (GenAI) programs are vulnerable to a vast array of attacks, from specially crafted prompts that can break guardrails to data leaks that can expose sensitive information.


The deeper the research goes, the more experts are discovering just how broad GenAI's risk surface is, especially for enterprise users with extremely sensitive and valuable data.


“This is a new attack vector that opens up a new attack surface,” said Elia Zaitsev, chief technology officer of cybersecurity vendor CrowdStrike, in an interview with ZDNET.

“I see with generative AI a lot of people just rushing to use this technology, and they’re bypassing the normal controls and methods” of secure computing, said Zaitsev. 

“In many ways, you can think of generative AI technology as a new operating system, or a new programming language,” said Zaitsev. “A lot of people don’t have expertise with what the pros and cons are, and how to use it correctly, how to secure it correctly.”
