Apiiro has provided insights into how generative AI coding tools are accelerating development while simultaneously increasing security risks.

This research found that generative AI tools have supercharged coding velocity while putting sensitive data like Personally Identifiable Information (PII) and payment details at significant risk.

As organisations increasingly adopt AI-driven development workflows, the need for robust application security and governance is becoming ever more critical.

AI coding tools spur productivity

Generative AI tools have become mainstream in software engineering since OpenAI introduced ChatGPT in late 2022. Microsoft, the parent company of GitHub, reports that 150 million developers now use the platform, a 50% increase over the past two years.

Apiiro’s data indicates a 70% surge in pull requests (PRs) since Q3 2022, far outstripping repository growth (30%) and the increase in developer counts (20%). These statistics highlight the dramatic impact of AI tools in enabling developers to produce significantly more code in shorter timeframes.

Yet, this explosion in productivity comes with an unsettling caveat: an increase in application security vulnerabilities.

Faster development comes at a price

The sheer volume of AI-generated code is magnifying risks across organisations, according to Apiiro’s findings.

Sensitive APIs exposing data have almost doubled, reflecting the steep rise in repositories created with generative AI tools. With review capacity unable to scale as fast as code output, in-depth auditing and testing have suffered, leaving gaps in security coverage.

“AI-generated code is speeding up development, but AI assistants lack a full understanding of organisational risk and compliance policies,” the report notes. These shortcomings have led to a “growing number of exposed sensitive API endpoints” that could potentially jeopardise customer trust and invite regulatory penalties.

Gartner’s research corroborates Apiiro’s findings, suggesting that traditional, manual workflows for security reviews are increasingly becoming bottlenecks in the era of AI coding. These outdated systems are hindering business growth and innovation, says the report.

Threefold spike in PII and payment details exposure

Apiiro’s Material Code Change Detection Engine revealed a 3x surge in repositories containing PII and payment details since Q2 2023. Rapid adoption of generative AI tools is directly linked to the proliferation of sensitive information spread across code repositories, often without the necessary safeguards in place.
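Detection engines of this kind are proprietary, but the underlying idea can be illustrated with a toy scan. The sketch below is not Apiiro's engine; the patterns, names, and sample snippet are hypothetical, and real detectors are far more sophisticated than simple regular expressions:

```python
import re

# Illustrative PII patterns only; a production detection engine would use
# far richer analysis than these simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(source: str) -> list[str]:
    """Return the names of PII patterns found in a source-code snippet."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(source)]

snippet = 'TEST_USER = {"email": "jane@example.com", "ssn": "123-45-6789"}'
print(scan_for_pii(snippet))  # -> ['email', 'us_ssn']
```

Run across every repository a team owns, even a crude scan like this surfaces how easily hardcoded PII spreads with fast-moving, AI-assisted commits.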

This trend raises alarm bells as organisations face a mounting challenge in securing sensitive customer and financial data. Under strict regulations like the GDPR in the UK and EU, or the CCPA in California, mishandling sensitive data can result in severe penalties and reputational harm.

10x growth in APIs missing security basics

Perhaps even more worrisome is the rise in insecure APIs. According to Apiiro’s analysis, there has been a staggering 10x increase in repositories containing APIs that lack essential security features such as authorisation and input validation.
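To make the risk concrete, the sketch below contrasts an endpoint that lacks both safeguards with a hardened equivalent. The code is hypothetical, not drawn from the report; the function names, token check, and data are stand-ins:

```python
# A minimal sketch contrasting the insecure API pattern the report flags
# (no authorisation, no input validation) with a hardened equivalent.

USERS = {42: {"name": "Jane", "card_last4": "4242"}}

def get_user_insecure(user_id):
    # No authorisation check and no input validation: any caller can read
    # any record. This is the pattern behind the report's 10x figure.
    return USERS.get(user_id)

def get_user_secure(user_id, caller_id, caller_token):
    # Input validation: reject anything that is not a non-negative integer.
    if not isinstance(user_id, int) or user_id < 0:
        raise ValueError("invalid user id")
    # Authorisation: callers may only read their own record. The string
    # comparison is a stand-in; a real service would verify a signed token.
    if caller_token != "valid-token" or caller_id != user_id:
        raise PermissionError("not authorised")
    return USERS.get(user_id)
```

The insecure version is shorter and easier for an AI assistant to generate, which is precisely why such endpoints proliferate when speed is the only metric.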

APIs serve as a critical bridge for interactions between applications, but this exponential growth in insecure APIs highlights the dangerous downside of the speed-first mentality enabled by AI tools.

Insecure APIs can be exploited for data breaches, malicious transactions, or unauthorised system access, compounding an already-growing set of cyber threats.

Why traditional security governance is failing

The report stresses the need for proactive measures rather than reactive ones. Many organisations are struggling because their traditional security governance frameworks cannot keep pace with the scale and velocity of AI-generated code.

Manual review processes are simply not equipped to manage the growing complexities introduced by AI code assistants. For instance, a single pull request from an AI tool might generate hundreds or even thousands of lines of new code, making it impractical for existing security teams to review each one.
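One common response is to automate the first pass over every change. The toy gate below is illustrative only (the marker list and function are hypothetical, and any real tool is far more sophisticated), but it shows how a script can flag risky additions across a diff too large for line-by-line human review:

```python
# Toy illustration of an automated review gate: flag added lines in a
# unified diff that mention a sensitive marker, so humans review only
# the flagged subset instead of the entire AI-generated change.

RISKY_MARKERS = ("password", "api_key", "card_number", "ssn")

def flag_risky_additions(diff_lines):
    """Return added lines ('+' prefix) that mention a risky marker."""
    flagged = []
    for line in diff_lines:
        if line.startswith("+") and any(m in line.lower() for m in RISKY_MARKERS):
            flagged.append(line)
    return flagged

diff = [
    "+password = 'hunter2'",
    "+x = compute()",
    "-old_line",
]
print(flag_risky_additions(diff))  # -> ["+password = 'hunter2'"]
```

Gates like this do not replace security review; they triage it, which is the shift from manual to automated governance the report argues for.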

Consequently, organisations find themselves accumulating technical debt in the form of vulnerabilities, sensitive data exposure, and misconfigured APIs—each of which could be exploited by attackers.

Need for caution in the era of AI coding tools

While tools like GitHub Copilot and other GenAI platforms promise unprecedented productivity, Apiiro’s report clearly demonstrates an urgent need for caution.

Organisations that fail to secure their AI-generated code risk exposing sensitive data, breaching compliance regulations, and undermining customer trust.

Generative AI offers an exciting glimpse into the future of software engineering, but as this report makes clear, the journey to that future cannot come at the expense of robust security practices.
