IT Brief UK - Technology news for CIOs & IT decision-makers

AI-built prototypes leave firms struggling to scale

Fri, 6th Mar 2026

I-Finity has reported a rise in approaches from organisations that built software prototypes with AI tools but then struggled to make the applications suitable for live use.

The York-based web and software developer said it is increasingly being asked to fix AI-first applications after teams run into problems during deployment and later expansion. The trend reflects a broader shift in how software is written, as more developers use generative tools in day-to-day work.

AI-generated code is common in early-stage builds because it can quickly produce working demonstrations. In commercial settings, those early builds face different demands once real users arrive, data volumes grow, and systems connect to other services.

Many of the applications I-Finity sees at this stage show weaknesses that were not obvious in prototype form, including bugs, performance issues, limited scalability, and difficulty meeting regulatory and policy requirements.

Russ Huntington, I-Finity's chief technology officer, said the rise in remediation requests reflects the gap between producing a proof of concept and running production software.

"AI is brilliant for exploring ideas quickly, but we're seeing a growing number of businesses discovering that their AI-built apps simply aren't designed to scale or meet compliance requirements," Huntington said.

Prototype versus production

These comments come as businesses experiment with AI tools that generate code from prompts, specifications and partial examples. Teams often use the tools to build internal products, customer-facing apps and platform features at speed.

In many organisations, AI-first development also changes who can produce working software. Non-specialists can assemble basic applications and automate tasks that once required a dedicated development team, reducing initial effort but creating uncertainty about code quality and long-term maintainability.

The difference between a prototype and a production service often becomes clearest when a business tries to scale. Scaling brings new requirements for reliability, monitoring and support. It also raises questions about data handling, user permissions and the security of connected systems.

Huntington said some of the most serious problems arise when governance is postponed. He pointed to privacy and data storage as areas that can create wider exposure once an application moves beyond an experiment.

"Vibe coding helps lower the barrier to entry and reduce upfront costs, and allows teams to explore ideas without committing to a full development cycle. But when those early builds are pushed further than they were designed for, they often fail," Huntington said.

"And if issues around GDPR and secure data storage aren't addressed early on, a quick-fix AI app can quickly become a significant risk."

Review discipline

Industry research has raised related concerns about how consistently developers check AI-generated output. A survey by Sonar found that while 72% of developers use AI tools daily, fewer than half consistently review the code those tools produce.

Other published figures suggest AI coding tools are now routine. Research cited in the release said 84% of developers use them in their workflows, with the tools responsible for around 41% of code written in production environments. The same source said nearly half of developers report quality issues or incorrect outputs from AI coding tools.

Academic work has also examined security and logic errors in generated code. Independent research from Cornell University found that AI-generated code can contain measurable security vulnerabilities and logic issues, and that quality varies between tools.

These findings point to a central challenge in AI-assisted development: the speed of code generation can outpace the discipline of testing, review and documentation. In traditional development, processes such as code review and automated testing sit between writing code and releasing it. Teams that treat AI output as finished software may compress or skip those steps.
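As a minimal, hypothetical illustration of that gap (the function names and scenario are invented for this sketch, not drawn from I-Finity's work): a generated helper can demo perfectly on happy-path data while missing the edge case that a basic automated test, run before release, would force into the open.

```python
# Hypothetical AI-generated helper: demos well on sample data,
# but crashes on an input shape real traffic will produce.
def average_response_ms(samples: list[float]) -> float:
    return sum(samples) / len(samples)  # ZeroDivisionError on an empty list

# The step a prototype often skips: harden the code once a test
# exercises the edge case, before it ships to production.
def safe_average_response_ms(samples: list[float]) -> float:
    if not samples:  # empty batches do occur in live systems
        return 0.0
    return sum(samples) / len(samples)

# Basic checks of the kind that sit between writing and releasing code.
assert safe_average_response_ms([10.0, 20.0]) == 15.0
assert safe_average_response_ms([]) == 0.0  # the case the prototype missed
print("checks passed")
```

The point is not the arithmetic but the process: the first version is exactly what a quick prompt tends to yield, and only a deliberate review step turns it into the second.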

"Vibe coding" trend

I-Finity linked the trend to the rise of "vibe coding", a term for workflows that rely heavily on AI tools to generate code. It said searches for the phrase have risen from near zero two years ago to more than 33,000 a month.

The practice is no longer confined to hobbyists and early adopters. Professionals use generative coding tools for boilerplate, debugging help, and drafting functions and tests. With adoption at scale, process gaps can affect commercial products, internal systems, and customer data.

I-Finity advised organisations to treat AI output as a starting point rather than a replacement for engineering work. It said businesses should apply standard development controls when moving beyond prototypes, including security checks, data governance and structured testing, as more teams try to take AI-built applications into live environments.