In the age of vibe coding, trust is the real bottleneck


Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Microsoft CFO’s AI spending runs up against tech bubble fears…How AI helped one man (and his brother) build a $1.8 billion company…Apple escalates crackdown on vibe coding apps.


AI can now write code faster than a human can possibly type. With “vibe coding” tools like Anthropic’s Claude Code and OpenAI’s Codex, developers are gleefully building—and shipping—at a pace that would have been unthinkable just a year ago. Even Claude Code’s creator, Boris Cherny, has boasted that the latest version was written entirely by—yes—Claude Code.

But while vibe coding may be fast, it can also introduce subtle bugs and vulnerabilities. And human error hasn’t gone away: Claude Code is now under scrutiny after its own source code was accidentally leaked this week due to a packaging mistake.

For enterprises, these kinds of vulnerabilities are a nonstarter. At large companies with sprawling codebases, it’s not just about writing code faster—it’s about ensuring that code is correct, secure, and compliant with internal systems and external obligations. As AI tools begin to generate production-ready code automatically, the bottleneck is shifting from writing software to verifying it. And at enterprise scale, where millions of code changes can flow through a system each year, even small errors can quickly compound into major risks.

That got me thinking about an interview I did two years ago with Itamar Friedman, cofounder and CEO of Qodo, an AI code-review company that has just raised $70 million to tackle what he calls the growing problem of “AI slop” in codebases.

When I first spoke to Friedman in early 2024, when the company was still called CodiumAI, he talked about “flow engineering”—a system in which one model generates code and another critiques it, adding layers of testing and reflection. But even then, it was clear that generating code was considerably easier than making sure it was accurate and worked well, and that “code integrity” was key.

In a chat with Friedman yesterday, he argued that today’s AI coding tools, powered by LLMs, are designed to complete tasks, not to question them—making a separate “governance and trust layer” essential to determine what should (and shouldn’t) ship.

“AI is not enough when you’re talking about real-world software quality and code governance,” he said. “What you need, actually, is official wisdom.” He explained that as a developer in a big organization, creating quality code isn’t just about being smart. It’s about knowing how a specific company does things—all the tribal knowledge within the organization. 

Qodo, he explained, analyzes how developers in an organization actually write and review code—looking at pull requests, comments, and past changes—and turns that into a set of rules that define what “good” looks like for that company. Those rules are then enforced automatically, flagging new code that violates them.

In the age of AI, the challenge for enterprises is that they want to move faster, but don’t have the freedom to change their codebases unless they can be sure that code will remain trustworthy. 

“That’s the gap we’re trying to close,” said Friedman, who spent three years as a director of machine vision at Alibaba before launching what is now Qodo in 2022, just a few months before ChatGPT launched. Qodo clients, including Walmart, Nvidia, Ford and Texas Instruments, want to move fast, he explained, but they also know their systems depend on layers of accumulated knowledge and constraints. 

Today’s vibe coding landscape, he added, overestimates how much these tools can be trusted in the short term—and underestimates how much a trust layer is needed to make them viable in the real world for the long haul.

With that, here’s more AI news.

Sharon Goldman
[email protected]
@sharongoldman

This story was originally featured on Fortune.com
