Network Intelligence
Paul Patras on how AI is transforming mobile networks to prevent communications blackouts and optimise energy consumption

We rarely think about mobile networks – until they stop working. Last week’s Vodafone outage was a timely reminder, and there’s good reason to worry that failures could become more commonplace. Underneath every call, video stream or GPS query are vast, energy-hungry systems that run around the clock and are subject to increasing strain.
Mobile networks have never been perfectly optimised. To stay ahead of demand and guarantee reliability, operators overprovision, running far more equipment than is typically needed. But keeping infrastructure switched on even when it is barely used comes at a huge energy cost. As energy prices have risen, that model has become steadily less sustainable, especially as margins have tightened and expectations for service quality have grown.
We’re surrounded by systems that never sleep, but what if AI could help them rest? Paul Patras, founder of Net AI, has spent the past few years building forecasting tools that let telecom providers safely power down infrastructure when it isn’t required without sacrificing service quality. What began as academic research at the University of Edinburgh has evolved into tools that help operators save energy while maintaining performance, addressing what’s become a critical challenge for an industry that underpins vast swathes of the economy.
As the company increasingly takes on projects where safety and security are paramount, new challenges emerge: how do you build trust in systems that must not fail? Through an ARIA-funded project, Net AI is now tackling those challenges too, using synthetic data to overcome information-sharing constraints and building safeguards that make AI adoption viable in tightly regulated, safety-critical environments.
What we discussed
How Net AI evolved from academic research to addressing real industry pain points.
Why energy efficiency is critical for mobile network operators now.
The data sharing barrier and Net AI’s synthetic data approach.
Building scalable AI models for telecommunications infrastructure.
The path from research to commercialisation in a highly regulated sector.
What government support looks like in practice and where gaps remain.
Lessons for scaling deep tech telecommunications companies
For policymakers:
Understand the landscape before designing interventions. Spend time with companies on the ground to understand the real challenges they face; this will help shape funding programmes and identify policy gaps.
Match funding mechanisms to company realities. Innovate UK’s quarterly, in-arrears payments – which can be delayed by up to six weeks – can create cash-flow crises for startups. By contrast, the European Commission pays 80% of grant funding upfront, a model the UK should emulate.
Build continuity into early-stage support programmes. ICURe provides valuable upskilling for researchers transitioning to founders, but there’s often a gap before companies can access follow-on schemes.
Learn from what works in grant-making – fast decisions, light paperwork, flexible terms. ARIA’s quick turnaround, minimal paperwork and payment flexibility show what founders need.
Allow flexibility for distributed teams. Terms for grants that restrict paying team members based abroad haven’t caught up with the post-pandemic workforce reality.
Right-size evaluation criteria. Scoring down an early-stage company for lacking an HR function misunderstands what matters at different stages of growth.
Invest in secure testing environments for critical infrastructure AI. Create sandboxes where companies can validate AI solutions on representative data without operators having to share proprietary information. This de-risks adoption for both sides.
Invest in tooling and support for public compute infrastructure. Public compute facilities offer competitive pricing, but they lack the tools, developer experience and customer support that make commercial clouds attractive to startups.
Think beyond research compute. The AI Research Resource conversation often focuses on training models, but deployment infrastructure matters too, especially high-security environments for sensitive applications.
For founders:
Focus on painkillers, not vitamins. The industry will talk enthusiastically about future possibilities, but they’ll only pay for solutions to immediate problems. Find where the pressure is greatest.
Build for portability from day one. Avoid platform-specific tools even when they’re convenient. The ability to quickly migrate infrastructure represents strategic flexibility.
Address data sharing barriers early. If customers won’t share data, build alternatives – such as local training, synthetic data generation or federated learning – that work within their constraints.
Do due diligence on investors. They’ll scrutinise you, so scrutinise them back.
Full interview
I. From vitamins to painkillers
Net AI sits at an interesting intersection – applying AI to telecommunications infrastructure. Can you explain what you’re building and who you’re serving?
Net AI is a network intelligence company that emerged from research we were doing at the University of Edinburgh around 2016-2017. We develop software using proprietary AI to provide real-time and predictive insights into mobile network usage and performance. Our goal is to help mobile network operators reduce their energy consumption – in some cases by up to 60% – and improve their service quality.
Think of it like a motorway with four lanes per direction. If you can tell you only need two of the four lanes, you can switch the other two off. Service is still there, but you’ve reduced maintenance costs and extended the infrastructure’s life. That’s essentially what we do – help operators understand demand patterns so they can dynamically manage resources without degrading service.
The customer base is primarily mobile network operators, though we’re increasingly looking at adjacent markets like fixed networks, cloud data centres and potentially smart grid providers.
How did you arrive at this as your focus?
When we started, 5G was just taking off, and there was a lot of excitement about network slicing – the idea of partitioning physical infrastructure into logical networks for specific applications. We built an AI tool that could understand and quantify traffic demand in real time, which would feed into this slicing philosophy and help avoid over-provisioning resources.
We filed a patent, engaged with the market and realised the industry wasn’t ready. People were talking about slicing, but they didn’t have the tools to realise it, and more importantly, they had no clear understanding of how to monetise this new architecture.
We came to the conclusion that we were selling vitamins to people who were really looking for painkillers. There were other problems that were much more pressing.
What were those more pressing problems?
Two big themes emerged. First, energy consumption. People are consuming more and more data – mobile data traffic grew by around 80 times between 2007 and 2023 – which means more and more infrastructure needs to be deployed, which means the cost of running everything is going up. While infrastructure has become significantly more efficient, electricity consumption in the ICT use phase still increased in absolute terms by around 40%. Combine that with the sharp rise in electricity prices, which have increased by 275% for non-domestic consumers even after adjusting for inflation, and the cost of running mobile networks has become a growing concern for operators.
Second, customer retention. Customers are quite fickle – you can take your number to a different provider easily. So retention is becoming a problem, and one of the root causes is service quality.
We realised we could build tools that help solve both problems. We built a forecasting engine that predicts demand, which you can use to decide when to switch on and off parts of the radio infrastructure, which consumes most of the energy in a deployment. We recently demonstrated that we are able to cut energy consumption from a deployment of over 200 cells by 35%. If you extrapolate, for a medium-large mobile network operator this could mean annual savings of around £40 million. You can also use forecasting to understand how network quality will evolve and whether there will be problems before they become incidents.
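To make the idea concrete, here is a minimal sketch of how a demand forecast might drive a switch-off decision with a safety margin. It is purely illustrative – the capacities, margin and function names are invented, and this is not Net AI’s engine.

```python
import math

# Illustrative only: hypothetical capacities and margin, not Net AI's actual engine.
def cells_needed(forecast_mbps: float, cell_capacity_mbps: float, margin: float = 1.2) -> int:
    """How many cells to keep active, with headroom so demand is never underserved."""
    return max(1, math.ceil(forecast_mbps * margin / cell_capacity_mbps))

# A site with four 150 Mbps cells and a forecast of 180 Mbps for the next interval:
active = cells_needed(forecast_mbps=180, cell_capacity_mbps=150)
print(f"Keep {active} of 4 cells active; the rest can sleep")  # -> keep 2, sleep 2
```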
What do you think is the biggest misunderstanding about AI in telecommunications?
The machine learning and networking communities have been quite disconnected for a long time, which means there’s reluctance because telecom practitioners simply don’t understand what AI does and what it can do for them.
There’s historical context too. Just over a decade ago, some vendors had this idea of self-organising networks, and they were selling products that turned out not to be very useful – engineers could have done the job manually much better. So there’s lingering scepticism as a result.
There’s also a perception that AI will consume a lot of energy, so you’re just moving the energy consumption from one location to another. That can be true, but the way we build models is quite scalable – we don’t train a model for every cell, we train a model for deployments of hundreds to thousands of cells. So it’s inherently low power.
II. Why energy efficiency matters now
Is there anything that’s making your work particularly critical right now?
There are several interrelated factors. We’ve seen a rise in energy prices, partly due to the conflict in Eastern Europe. That’s one aspect you can’t really control.
The other is that demand for high-speed connectivity is growing quite fast. People now have more than two devices on average. Everyone has a phone, but they might also have a smartwatch. Their homes will be connected with multiple devices, and you have new applications emerging – VR headsets, cloud gaming – which all require more bandwidth. Global mobile data traffic grew from 106 exabytes per month in 2023 to a projected 123 EB/month in 2024, with forecasts suggesting it will more than double to 280 EB/month by 2030. That’s a compound annual growth rate of 15%.
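Those figures imply the growth rate he quotes. A quick back-of-the-envelope check (the 2030 number is a projection, not a measurement):

```python
# Sanity check on the quoted traffic figures (EB/month); the 2030 value is a forecast.
traffic_2024, traffic_2030 = 123, 280
cagr = (traffic_2030 / traffic_2024) ** (1 / (2030 - 2024)) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~14.7%, i.e. roughly 15%
```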
To meet that demand, you need to deploy more infrastructure, and that infrastructure consumes more energy. There was a McKinsey report recently showing that energy expenditure is now outpacing sales growth by 50% for operators.
That’s a big red flag. Operators need to find intelligent ways of managing infrastructure, otherwise costs will likely have to increase.
Can you already see signs of this?
The merger between Three and Vodafone is a good example. Three was no longer profitable, and Vodafone saw this as an opportunity to grow their customer base by absorbing a competitor. With that came infrastructure, so they would get to expand their deployment in bulk with a one-off cost rather than going through planning permissions, getting licences – all of that.
We’re also seeing large telecom groups selling off parts of their business. Vodafone Spain wasn’t profitable for years, so they sold it. You start to see these large transactions happening because operators struggle to keep what is, in my opinion, critical infrastructure profitable.
Maybe I’m being overdramatic, but we might get to a point where governments will have to step in and provide financial support so operators can continue providing connectivity to the population. Or you could see hyperscalers moving in and taking over, though I don’t think that will happen easily because telecoms is such a highly-regulated environment.
Mobile network operators pay annual spectrum licence fees of around £0.7-1 million per megahertz, reflecting how scarce and valuable radio spectrum is. There are also obligations to provide coverage to a high percentage of the landmass. Everything from how much power an antenna can emit to the interoperability of network components is governed by a web of regulations, including health and safety limits, spectral reuse constraints and technical specifications defined by international standards. There’s potential for simplification through Open RAN, an alternative approach that aims to disaggregate network hardware and software so operators can mix and match components more easily, but adoption has been slow.
So the opportunity lies in managing underutilised resources more intelligently.
Exactly. Many of the resources deployed to meet connectivity demand sit underutilised much of the time. If you can understand how demand evolves on a daily, continual basis, and have the right levers to control how many resources are active at any given time, you can save energy. In the data we’ve analysed, we’ve seen a mean peak-to-average throughput ratio of 6.21 – which means infrastructure is often dimensioned for peak demand more than six times higher than the average. That level of variation creates a significant opportunity for smarter, demand-aware management.
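For readers who want that ratio in concrete terms, here is a toy calculation over an invented hourly trace – the numbers are illustrative, not Net AI’s data:

```python
# Toy peak-to-average calculation; the hourly samples are invented for illustration.
throughput_mbps = [3, 2, 2, 4, 6, 10, 18, 42, 75, 60, 35, 12]

peak = max(throughput_mbps)
average = sum(throughput_mbps) / len(throughput_mbps)
print(f"Peak-to-average ratio: {peak / average:.2f}")
# Infrastructure dimensioned for the peak sits largely idle whenever demand is near the average.
```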
III. Building models where errors aren’t equal
That’s where your focus on forecasting comes in, but this isn’t just standard demand forecasting. What makes predicting network demand different from other forecasting problems?
Networks are peculiar in that when you do forecasting, not all mistakes are created equal. If you’re estimating demand – let’s say you want to switch off infrastructure to save energy – and you underestimate, you switch off too much. That means you cannot serve customer demand, service quality degrades, and you lose on both ends: you save on energy but your service is poor, so you either pay penalties because you have service level agreements with enterprise customers, or your customers become frustrated and move elsewhere.
We’ve built a unique methodology to train forecasting models so they don’t make these types of underestimation mistakes. That’s what allows you to save energy without impacting service quality. The models are trained to be asymmetrically cautious – it’s okay to slightly overestimate, but critical to avoid underestimation.
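One common way to encode that caution is to weight underestimation more heavily in the training loss. The sketch below shows the general idea with an asymmetrically weighted error; it is a generic technique, not a description of Net AI’s proprietary methodology, and the weight is an arbitrary choice.

```python
import numpy as np

def asymmetric_loss(y_true: np.ndarray, y_pred: np.ndarray, under_weight: float = 10.0) -> float:
    """Mean error that penalises underestimating demand far more than overestimating it."""
    error = y_true - y_pred                              # positive error = demand was underestimated
    penalty = np.where(error > 0, under_weight * error, -error)
    return float(penalty.mean())

# Missing real demand by 10 Mbps hurts ten times more than over-forecasting by 10 Mbps:
print(asymmetric_loss(np.array([100.0]), np.array([90.0])))   # 100.0
print(asymmetric_loss(np.array([100.0]), np.array([110.0])))  # 10.0
```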
What did you need that off-the-shelf models couldn’t give you?
Some people take an off-the-shelf forecasting model and say: “I’m going to train a model at the level of every cellular antenna.” If you have a deployment with tens of thousands of antennas, you need to maintain tens of thousands of models. Effectively, you’re trying to solve an energy efficiency problem at the edge, but now you’re moving the same problem to the cloud because you have to do so much intensive computation to serve your edge infrastructure.
Instead, we have models that are highly scalable. We can train a single model for hundreds to thousands of cells, which means orders of magnitude fewer parameters to train and inherently lower power consumption. That requires careful design, a proper training methodology, and data engineering that captures enough of the deployment’s diversity and dynamics.
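As an illustration of the “one model, many cells” idea, a shared forecaster can condition on a learned per-cell embedding instead of maintaining a separate model per antenna. The PyTorch sketch below is a toy built on that assumption, not Net AI’s architecture; every dimension and name is invented.

```python
import torch
import torch.nn as nn

class SharedDemandForecaster(nn.Module):
    """One model for all cells: a learned cell embedding conditions a shared recurrent core."""
    def __init__(self, n_cells: int, embed_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.cell_embedding = nn.Embedding(n_cells, embed_dim)   # per-cell context, not per-cell model
        self.encoder = nn.GRU(input_size=1 + embed_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, traffic: torch.Tensor, cell_ids: torch.Tensor) -> torch.Tensor:
        # traffic: (batch, time, 1) past demand; cell_ids: (batch,) integer cell identifiers
        emb = self.cell_embedding(cell_ids).unsqueeze(1).expand(-1, traffic.size(1), -1)
        _, h = self.encoder(torch.cat([traffic, emb], dim=-1))
        return self.head(h[-1])                                  # next-step demand per cell

model = SharedDemandForecaster(n_cells=1000)
forecast = model(torch.rand(8, 24, 1), torch.randint(0, 1000, (8,)))  # 8 cells, 24 past steps
```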
IV. Solving the data sharing problem
Data sharing seems to be a persistent challenge in this space. How are you addressing it?
Data sharing is a real barrier to adoption, especially in telecoms. Although we don’t need user data – we just need network-level data, which doesn’t carry privacy-sensitive details – we recognise it does carry commercially sensitive information. If I’m operator X, I wouldn’t want operator Y to know how I deploy my network or configure things.
Because of that, operators are quite reluctant to share data even with confidentiality agreements in place. This became quite frustrating, so we asked: “How can we overcome this problem?”
We’re now building a generative AI tool – not an LLM, but traditional generative AI – that allows us to generate synthetic data that looks like what happens in a real deployment.
How does this work in practice?
If you have a problem and want us to solve it – or want to test solutions from multiple vendors – but don’t want to share data, we come to you and give you a box. You train the box with your data locally. Once it’s trained, you give us the box back, but it doesn’t come with your data – just the model weights.
Now we can use that box to generate synthetic data and create a behavioural digital twin of your deployment. Then we can train an AI model that solves your problem without ever accessing your actual network data.
This approach addresses both the data sharing barrier and allows for safer testing of different AI-based solutions.
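Here is a minimal sketch of that “train locally, hand back only the model” workflow, using a simple Gaussian mixture as a stand-in generative model. Net AI’s actual tool is proprietary, and every name and modelling choice below is an assumption made purely for illustration.

```python
# Illustrative sketch only: a simple generative model stands in for Net AI's proprietary tool.
import numpy as np
from sklearn.mixture import GaussianMixture
import joblib

# --- On the operator's premises: fit a generative model on private traffic traces ---
private_traces = np.random.rand(10_000, 24)          # placeholder for real per-cell daily profiles
generator = GaussianMixture(n_components=8).fit(private_traces)
joblib.dump(generator, "trained_generator.joblib")   # only model parameters leave the site, never the data

# --- Back at the vendor: sample synthetic traces that mimic the deployment's behaviour ---
generator = joblib.load("trained_generator.joblib")
synthetic_traces, _ = generator.sample(10_000)       # behavioural stand-in for the real network
```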
V. Compute infrastructure
The United Kingdom is investing in domestic data centre capacity. How much does it matter for a company like yours?
From a sovereignty perspective, it makes sense to have resources closer to your doorstep, ensuring regulatory compliance specific to a country. There was a recent investigation by the European Commission where Microsoft was asked if they could guarantee that data handled in Europe doesn’t end up in the US, and they couldn’t.
If you’re dealing with sensitive deployments – like BT running services for GCHQ or other government agencies – you’ll have strict requirements. You don’t want to rely on third parties that might not meet certain compliance regulations.
I think the current geopolitical climate emphasises that you don’t want to be overly reliant on entities that might become adversarial states. The UK is now investing heavily in satellites, which makes sense given the reliance on Starlink and the somewhat erratic management there.
There’s a distinction between AI research compute and production compute. Do you think policymakers understand that difference?
I think the conversation often focuses on research computing rather than thinking about compute as critical infrastructure for deployment and adoption of AI systems.
We need to think about different types of compute infrastructure – not just for training models, but for running AI services at scale, including high-security environments where you might be processing confidential or sensitive data.
In telecoms, people often speak about ‘five nines availability’ – systems functioning 99.999% of the time. To meet that, high-performance compute infrastructure needs to be highly resilient to power outages and cyber attacks, and elastic enough to meet varying demand. In-depth real-time insights may be the result of analysing millions of events that occur in a network over a few minutes, which requires scalable stream-processing pipelines. Handling sensitive data typically means setting up data safe havens with tight access control – often physical access only, dedicated secure connections, or multiple layers of end-to-end authentication.
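To put ‘five nines’ in perspective, the downtime budget it implies is tiny:

```python
# Yearly downtime budgets implied by different availability targets.
minutes_per_year = 365.25 * 24 * 60
for availability in (0.999, 0.9999, 0.99999):
    downtime = minutes_per_year * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime:.1f} minutes of downtime per year")
# Five nines leaves roughly five minutes a year, hence the emphasis on resilience.
```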
VI. Navigating the funding landscape
How has your experience been with different grant funding schemes?
It’s been quite varied. At the very early stage, Scottish Enterprise had the High Growth Spinout Programme, which provided funding for market discovery and validation. That was quite good – it bought me time outside my academic duties to focus on business development and learn new skills.
The challenge was there wasn’t continuity. After that validation phase, it was essentially: “Now go figure it out.” There was probably a two-year gap before we could access other schemes like Smart Scotland.
We also had funding through ICURe before we launched, and I thought that scheme was quite nice, especially for upskilling. There was a lot I learned that I didn’t know previously as a scientist.
But I had a disappointing experience with one particular call recently. It was for sovereign AI projects – three-month projects – and they specified applications should be between £50,000 and £120,000. We applied for £48,920, deliberately asking for slightly less than the minimum because we wanted to demonstrate good value for money by using cloud credits we could secure at no cost to the project. The proposal was desk-rejected because it was below £50,000. Nobody even looked at the content – it was a pure tick-box exercise. That seems particularly shortsighted for a sovereign AI initiative.
More broadly, the amount of paperwork for a three-month project was identical to what you’d need for a two-year project. There’s a lot of workload involved for relatively modest funding.
You’re also working with ARIA on a project around synthetic data for network energy efficiency. How did that experience compare?
The experience we had with ARIA was much better. The turnaround time was super short – you write a concise proposal, submit, and within three weeks you know if you’re getting an interview or if it’s not suitable. The interview was very sensible. People asked good questions, it was a proper discussion, and we were offered a research contract. We could even agree on payment terms. There are technical reports to submit periodically, but I think the overhead is quite manageable. It’s a good instrument.
What about Horizon Europe funding?
We have two Horizon Europe projects ongoing at the moment. The difference is we don’t get funding directly from the European Commission – we get it from Innovate UK, which underwrites what we would have been entitled to before Brexit. Unlike the European Commission, which pays 80% upfront, Innovate UK pays quarterly in arrears. Sometimes there are delays because inexperienced staff query claims, and you end up going back and forth for four to six weeks. That can create cashflow problems because you need the money ahead of doing the work to pay people.
Are there any other issues with how grant funding works in practice?
First, assessment criteria should be appropriate for the stage of the company in question. We were once scored down in an early-stage competition because we didn’t have an HR function. We still don’t have one – we outsource it. For a small team, that makes sense, but the evaluators seemed to expect practices appropriate for a much more mature company.
Second, there needs to be more flexibility around distributed teams and international talent. The world has changed, and funding mechanisms need to reflect that reality. We were born during the pandemic, so inherently we have a distributed workforce. We have one person in France, two in Spain, and one developer in the US who’s a dual national. Those people become contractors, and some grants no longer allow you to pay contractors outside the UK. The pandemic changed the workforce landscape. If you want to tap into the best talent, you need to be creative.
VII. What policymakers need to understand better
If you were advising the Secretary of State for Science, Innovation and Technology right now, what would be your priority areas to address?
First, understanding who’s doing what in the ecosystem. There’s often a disconnect between decision-makers and people who develop things. Agencies sit in between, responding to funding seekers and reporting back to programme runners. If you can spend time meeting companies, understanding challenges they face, that helps inform how to shape funding programmes and identify gaps.
Second, funding is crucial. I know budgets are tight, but we need the ability to unlock funding in the right places.
Third, talent. With how the US is right now – visa costs have increased dramatically, employers need to pay hefty amounts – there’s an opportunity for the UK. We can be clever about attracting talent. While it goes against some of the rhetoric around immigration, you want to be honest and look in the mirror: we’re going to get some of the most talented people who will work, get paid proper salaries, and pay tax. It’s a win-win. You promote technological advancement locally and have highly-skilled, highly-paid individuals paying tax. To me, it’s a no-brainer.
What would you want regulators to understand better?
The telecom space is already highly regulated. Whether there should be domain-specific AI regulation is another question.
One approach could be creating a compliance framework with neutral testing spaces where solutions can be validated before deployment. That gives end-users confidence that the solution has undergone impartial validation and verification.
This is what the Telecom Infra Project and Digital Catapult are seeking to do to some extent. There are also smaller initiatives picking up now, like Lab of the North hosted by the University of York, similar to what the Berlin government developed with Deutsche Telekom for their i14y lab.
Wouldn’t that create an additional burden for AI developers?
It will for sure. Startups don’t typically like to hear that. But if that’s what it takes to demonstrate commercial viability and give confidence to operators who are managing critical infrastructure, I think it’s a small price to pay.
What’s your advice to founders trying to build companies in the AI and telecommunications space?
Be prepared to fail. Don’t get fixated on the idea – you might have unique know-how, but you may need to pivot once you understand your customers. Don’t build something you think is great; build something there’s actually a need for.
If you’re looking to raise, do your due diligence on investors just as they’ll do on you. You want people who provide not just capital but also introductions, understanding of where you have gaps, and how to fill those gaps – “soft money” alongside financial investment.
We ask all our guests the same closing question: what’s one interesting thing you’ve read or listened to recently that you’d like to share with our readers?
I’m an amateur triathlete and I found Alex Hutchinson’s Endure: Mind, Body, and the Curiously Elastic Limits of Human Performance to be a fascinating read about what drives human body endurance and the role the brain plays in that.
Second, We’ve done it before: how not to lose hope in the fight against ecological disaster, an episode from The Guardian’s Long Read podcast adapted from the book Human Nature by Kate Marvel, goes through a number of policy and technological achievements we’ve made as a species in order to survive. With a climate crisis unfolding and the news dominated by bleak events, I found this podcast quite uplifting.