
AI development to focus on technical use as consumer rollout slows

Thu, 4th Dec 2025

AI development is expected to intensify in technical domains during 2026, but consumer-facing AI expansions could slow, as organisations reassess their integration strategies and infrastructure challenges remain unresolved.

Development focus

Organisations are forecast to invest primarily in AI for software development, where teams can manage risk more readily. Developers are increasingly pairing with AI agents that write and test code in tandem, shifting the developer's role toward managing and coaching these agent teams. This concentration on technical workflows stands in contrast to slower progress in customer-facing sectors such as banking, healthcare, and retail, where even minor AI errors are seen as carrying a major risk of reputational harm or regulatory trouble.
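
As a loose illustration of that write-and-test pairing (every name here is hypothetical, not any vendor's actual workflow), a propose-and-verify loop might look like the following Python sketch:

# Sketch of the "write and test in tandem" pattern: a model drafts code,
# an automated check accepts or rejects it, and the developer supervises
# the loop rather than typing every line. propose_patch is a hypothetical
# stand-in for a code-generation model call.
import subprocess
import sys
import tempfile

def propose_patch(task: str, feedback: str) -> str:
    """Hypothetical model call; returns candidate Python source for the task."""
    # Canned reply so the sketch runs end to end without a real model.
    return "def add(a, b):\n    return a + b\n"

def run_tests(source: str) -> tuple[bool, str]:
    """Write the candidate to a temp file and run a trivial acceptance test."""
    test = source + "\nassert add(2, 3) == 5\nprint('ok')\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(test)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.returncode == 0, result.stderr

feedback = ""
for attempt in range(3):  # the human developer caps and reviews the loop
    candidate = propose_patch("implement add(a, b)", feedback)
    ok, feedback = run_tests(candidate)
    if ok:
        print(f"accepted on attempt {attempt + 1}")
        break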

"Enterprises will continue expanding AI's role in the SDLC before bringing it to end-user products, with an emphasis on the need for AI-driven workflows that are reliable, consistent, and safe, before public release," said Rob Mason, Chief Technology Officer, Applause.

Protocol adoption

The Model Context Protocol (MCP) is set to play a key role in facilitating communication between AI agents, tools, and digital platforms. Mason expects MCP to become "the connective tissue for how AI agents interact with tools, applications, and the web," making MCP integration as important a consideration for digital platforms as existing API development. Future digital strategies would therefore need to make company content as seamlessly accessible to AI agents as it is to human users.
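
For readers unfamiliar with the protocol, a minimal MCP server, sketched here with the official MCP Python SDK's FastMCP helper (the server name, tool, and catalogue data are illustrative, not from the article), shows how company content could be exposed to agents alongside an existing API:

# Minimal MCP server sketch exposing one tool an AI agent could call.
# Requires the official MCP Python SDK (pip install "mcp[cli]"); the
# tool and the catalogue data below are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("catalogue")  # hypothetical server name

# Stand-in for the company content that agents would need to reach
# as seamlessly as human users do.
_PRODUCTS = {
    "widget-a": {"price": 9.99, "in_stock": True},
    "widget-b": {"price": 24.50, "in_stock": False},
}

@mcp.tool()
def lookup_product(sku: str) -> dict:
    """Return price and stock status for a product SKU."""
    return _PRODUCTS.get(sku, {"error": f"unknown SKU: {sku}"})

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable agent can connect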

Strategic use

The hurried integration of generic AI features into enterprise products, primarily for marketing purposes, appears set to slow. Companies are anticipated to prioritise practical, value-driven implementations instead of adding AI as a superficial enhancement. Organisations have observed that unnecessary AI additions can extend development times, complicate quality assurance, and compromise final product reliability.

"The next phase of AI maturity will prioritize meaningful, use-case-specific integration, deploying AI only where it adds measurable value or unlocks new capabilities," said Mason.

Infrastructure challenges

While the pace of conceptualising and coding new AI solutions accelerates, outdated systems within large businesses continue to inhibit the realisation of these projects. Many AI initiatives fail to progress beyond the idea stage or struggle in production environments that do not support rapid deployment or rigorous quality assurance. Even leading brands reportedly face these constraints.

"These dated processes are hampering production and impacting quality - even the biggest brands have been affected," said Adonis Celestine, Senior Director and Automation Practise Lead, Applause.

Companies are expected to respond by increasing investment in modern cloud-based infrastructure throughout 2026. This upgrade is projected to shorten development cycles, allowing finished AI applications to be deployed in hours instead of weeks. Test and validation practices will also require an overhaul to suit this new timeline.

Security priority

As adoption of large language models (LLMs) expands, security issues are coming to the fore. Applause expects organisations will need robust oversight of every LLM deployed internally, especially as department-specific applications spread across functions such as finance, marketing, and HR. Comprehensive visibility and vetting practices, potentially including adversarial testing, are seen as essential to maintaining security and minimising the risk of vulnerabilities in generative AI tools.

"An organisation's AI apps, agents and services - some of which are handling extremely sensitive, high-stakes data - are only as secure and reliable as their underlying LLMs. But, how do you assess the resilience, reliability and scalability of an LLM or multiple LLMs?" said Celestine.

Benchmarking and validation, led by domain experts and dedicated QA teams, are likely to become routine, aiming to spotlight performance gaps and potential weaknesses in AI models used across varied organisational applications.
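
A minimal sketch of such a benchmark, assuming a golden set curated by domain experts and a simple containment check as the acceptance criterion (the ask function and the cases are hypothetical):

# Toy benchmark harness scoring a model against an expert-curated golden
# set. ask is a hypothetical model adapter (stubbed for the demo); real
# QA teams would use richer, domain-specific metrics and datasets.
from dataclasses import dataclass

@dataclass
class Case:
    prompt: str
    must_contain: str  # minimal acceptance criterion for this sketch

GOLDEN_SET = [
    Case("What is the standard VAT rate under invoice policy v2?", "20%"),
    Case("Which form starts the new-starter HR process?", "HR-01"),
]

def ask(prompt: str) -> str:
    """Hypothetical stand-in for the model under test."""
    return "Under policy v2 the standard VAT rate is 20%."  # canned reply

def run_benchmark() -> float:
    passed = 0
    for case in GOLDEN_SET:
        ok = case.must_contain in ask(case.prompt)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case.prompt}")
    return passed / len(GOLDEN_SET)

if __name__ == "__main__":
    print(f"score: {run_benchmark():.0%}")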
