The launch of ChatGPT in late 2022 propelled generative AI to the top of the agenda for boardrooms and nation states alike. Hundreds of billions of dollars are pouring into the sector, businesses are frantically reappraising workflows, and world leaders now talk about frontier models as if discussing oil reserves. It is a moment of unprecedented momentum, fuelled by a belief that we have entered the next great technological era.
Yet, set against all this activity, there’s a chorus of scepticism. Some question whether the capabilities of generative AI have been overstated; there are concerns over safety, trust, and transparency; and companies and governments alike worry about the return on (massive) investment.
One company tangling with these debates is IBM. It’s a tech giant that’s not set on winning the chatbot race so much as winning the enterprise AI race, meaning its perspective is heavily informed by operational business realities.
With that in mind, we put four key questions to the company’s UK and Ireland Chief Executive, Leon Butler…
One of the biggest debates in AI at the moment is whether AI is in a bubble. What is your take?
When I look at the current AI landscape, I understand why some people question if we’re entering a bubble. Investment has surged and valuations are high. But I don’t see this moment as a repeat of past tech bubbles. Instead, I believe it’s time to rethink what AI means for business.
We’re already watching AI generate value for enterprises—improving productivity, reducing costs, and transforming workflows in ways that simply weren’t possible a few years ago.
To bring this to life, AI and automation have helped IBM itself unlock extreme productivity gains across the company, reaching $4.5B in savings by the end of 2025. And it’s not just IBM seeing the benefits. Last year, the University Hospitals Coventry and Warwickshire NHS Trust introduced a new AI tool using IBM watsonx to manage HR-related queries. The Trust now estimates it saves over 2,000 working days per year across HR, which is a significant achievement.
I believe real transformation comes when companies change how they work, moving to a position where generative AI completes complex work, integrates seamlessly with many systems, and reshapes how businesses operate.
That said, I do believe we need to be thoughtful about where and how the industry is investing. The rush to build large, power‑hungry data centres is creating an infrastructure race that may not translate into meaningful returns for many players.
We’re focused on a different path—one we believe is far more aligned with the needs of our clients. We know AI is not just about using the right tools. It’s about building the right hybrid architectures that can handle complex and data-intensive workloads, including agentic AI. On top of this, we see the future of AI in models that are purpose built for specific business challenges and easier for enterprises to adopt. Just as important, they create room for developers and entrepreneurs to innovate, building applications that will fuel the next generation of growth and jobs.
For us, this era is about practical innovation, strong partnerships, and delivering technology that truly moves business forward.
Data shows that AI is already impacting graduate hiring. How should we tackle this at a societal level?
When we look at how AI is beginning to reshape graduate hiring, I see both a challenge and a tremendous opportunity. We can all see that AI is reshaping how people work and learn, and it’s clear some traditional entry‑level roles are changing as automation becomes more capable. We shouldn’t ignore the impact on students preparing to enter the workforce, as everyone now needs to build new skills for the AI era. I firmly believe society can get ahead of this—not by slowing innovation, but by widening access to the skills and tools that will define the future of work.
First, we need to rethink how we prepare young people for their careers. That means shifting our education systems toward problem‑solving, creativity, data literacy, and real‑world adaptability—skills that AI enhances. It also means forging deeper partnerships between universities, employers, and technology providers so students graduate already fluent in the tools they’ll encounter on day one.
We also have to broaden pathways into high‑value jobs. A study by Ambrosetti highlighted that over 43 million workers across the UK will need upskilling by 2030. What’s interesting is that 11 million of these workers will rely on non-traditional educational pathways. Digital skills development and on-the-job learning are some of the best ways to equip the future workforce. Apprenticeships, online learning, and certificate programs should stand alongside traditional degrees.
Then, at a societal level, we must ensure AI becomes a multiplier of opportunity, not a gatekeeper. That’s why I’m such a believer in open ecosystems—tools, models, and platforms that are accessible to students, startups, and educators. When more people can build with AI, more people can participate in the growth it creates.
Do AI’s benefits justify the environmental impact of training and running the models plus all the associated infrastructure?
When it comes to AI's environmental impact, I believe it's important the industry takes a responsible approach and is deliberate about how models are built and used.
Training and running ever-larger models can consume enormous amounts of energy, yet many applications don’t actually require broad general intelligence. We are seeing a growing shift toward smaller, fit-for-purpose models that point to a more sustainable path, including IBM’s own Granite family of small models.
Small models are cheaper to train, faster to deploy, and significantly more energy-efficient, while still delivering strong performance on targeted tasks. This shows that when AI is designed for efficiency, specificity, and real-world impact, rather than scale for scale’s sake, its societal and economic benefits can outweigh the environmental costs.
IBM is a full-stack operator, meaning we build both AI hardware and software to work together seamlessly. This gives us a unique perspective on how to reduce energy demands across the stack. For example, we’ve created an AI accelerator, IBM Spyre, which minimises interaction with the host CPU to run AI workloads at just 75 watts. Viewed across a company’s overall infrastructure footprint and energy sources, new infrastructure technologies like this can dramatically reduce energy consumption, making AI workloads more sustainable without compromising performance.
AI’s benefits can outweigh its environmental costs, but only if we build it differently. Then we can unlock the promise of AI without accepting unsustainable energy demands.
What will the major enterprise AI trends be as we head further into 2026?
As we look forward to 2026, I see enterprise AI entering a new phase—one defined less by headline‑grabbing experiments and more by practical, scalable value. And from IBM’s vantage point, several clear trends are emerging.
First, 2026 will be the year companies activate the “how”, using AI to differentiate and stand out from competitors. After years of experimentation, companies will need to move beyond pilots to real AI transformation. The proof will come from how organisations make AI deliver measurable results. Companies should look to operate AI agents at scale, running dozens or even hundreds of agents in production, built by multiple teams across multiple platforms and executing in diverse environments. The competitive edge will belong to organisations that treat AI not as a technology investment, but as a strategic discipline woven into every part of the enterprise.
Second, organisations will increasingly demand control over their AI. That means control over data, governance, and IP, as well as the flexibility to choose where workloads run—on‑premises, in the cloud, or at the edge. I believe digital sovereignty will also play a larger role as it gives organisations the ability to govern AI systems, data and infrastructure without relying on external entities. This view is backed by our recent IBV 2026 trends report. The report revealed 93% of executives believe factoring data sovereignty into their strategy is a must for this year.
Third, the era of generic AI infrastructure will come to an end in 2026. The trend of using identical servers as universal hammers for every AI nail is fading, because real business problems demand precision, not brute force. The future lies in specialised, co-created AI infrastructure: hardware and software designed together for specific use cases, not generic workloads. Think precision tools, not sledgehammers.