NCSC urges industry to secure “vibe coding” as AI-generated software adoption accelerates

This week at the RSA Conference, the UK’s National Cyber Security Centre (NCSC) CEO Richard Horne called on the cybersecurity community to develop safeguards around “vibe coding” as adoption of AI-assisted development tools continues to grow and presents both opportunities and risks.

Horne stated that while AI-generated code could help reduce vulnerabilities if implemented securely, it also has the potential to introduce or propagate weaknesses if not properly designed and reviewed. The NCSC emphasized that AI development tools must be secure by design and trained to avoid generating insecure code, as part of a broader effort to improve software security outcomes.

The agency also noted that the rapid growth of AI-assisted development is expected to drive wider adoption of “vibe coding,” making it critical for security professionals to establish controls and best practices early. The NCSC said the industry has both the opportunity and responsibility to ensure that AI-driven software development results in more secure systems over time.

   “To combat this ‘multi-dimensional’ threat, our collective approach to defending our societies must match it,” Horne said, likening cyber defense to a full court press in basketball, where “collective pressure from all actions together” can have the greatest impact.

Rajeev Raghunarayan, Head of GTM, Averlon had this to say:

   “Richard Horne is right to flag vibe coding as a security concern. The deeper risk is what it does to the underlying environment. More AI-generated code means more updates, more dependencies, and faster change across systems that security teams are still struggling to keep pace with.

   “The challenge isn’t just whether AI generates insecure code. Environments no longer stay stable long enough to evaluate risk the way teams traditionally operated, through point-in-time scans, static prioritization, and backlog-driven remediation. Security must move at the same pace as the changes being introduced, meaning it must evaluate and address risk as it happens, not weeks or months later.”

Ryan McCurdy, VP of Marketing, Liquibase adds this comment:

   “AI compresses the time between idea and production, raising the stakes for change control. When database changes reach production without policy enforcement, approvals, drift detection, and auditability, companies multiply risk with every release. The consequences show up in outages, compliance exposure, slower incident response, and inconsistent data that weakens execution across the business.

   “Leaders who govern change well can scale AI with more control, protect business-critical operations, and accelerate transformation without increasing operational risk.”

Michael Bell, Founder & CEO, Suzu Labs follows with this comment:

   “The NCSC’s Richard Horne is right that the cybersecurity community needs to get ahead of vibe coding rather than fight adoption. The commandments his team published at RSA this week are all individually correct. Secure model defaults. AI code reviews. Deterministic guardrails. Secure hosting. But treating them as a checklist misses how security actually works. No single control catches everything.

   “Vibe coding security needs to be defense in depth. Security checks at the model layer, at pre-commit, at the build pipeline, at deployment, and at runtime. Each layer catches what the previous one missed. We’ve already seen what happens when security depends on one check. When researchers examined vibe-coded applications, 10% of apps on one platform had the exact same security misconfiguration, and broader research shows only 10.5% of AI-generated code is secure even when 61% is functionally correct.

   “The NCSC’s CTO imagined a future where AI code ends up more locked down than any SaaS product ever was. That’s achievable. But only if we build layered security infrastructure to match the speed of AI-assisted development. One check at one stage is a half-court trap. The adversary gets around it. Defense in depth is the full court press.”
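To make the "each layer catches what the previous one missed" idea concrete, here's a minimal, hypothetical sketch of layered code review. The layer names and the toy pattern checks are illustrative assumptions, not any vendor's actual tooling; in practice each layer would be a real control (a secure-by-default model, a pre-commit hook running a static analyzer, a CI scanner, and so on).

```python
# Hypothetical defense-in-depth sketch: each "layer" is a simple check,
# and findings accumulate across layers so no single check is load-bearing.
# The string-matching rules below are toy stand-ins for real analyzers.

def model_layer(code: str) -> list[str]:
    # Stand-in for a model trained to refuse obvious anti-patterns.
    return ["eval() call"] if "eval(" in code else []

def pre_commit(code: str) -> list[str]:
    # Stand-in for a git pre-commit hook running a secret scanner.
    return ["hardcoded credential"] if "password=" in code else []

def build_pipeline(code: str) -> list[str]:
    # Stand-in for a CI-stage configuration check.
    return ["debug mode enabled"] if "DEBUG=True" in code else []

LAYERS = [model_layer, pre_commit, build_pipeline]

def review(code: str) -> list[str]:
    """Run every layer; later layers catch what earlier ones missed."""
    findings: list[str] = []
    for layer in LAYERS:
        findings.extend(layer(code))
    return findings
```

The point of the structure is that removing any one layer degrades coverage gracefully instead of collapsing it, which is the difference between the half-court trap and the full court press described above.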

There are dangers in using AI to write code. Organizations need to be aware of them and put the right mitigations in place before something really bad happens. And I do mean really bad.
