Total Recall
Hanna Celina on how combining learning science with generative AI can overcome the cognitive forgetting curve to optimise human memory

The fundamental mechanics of how humans acquire and retain knowledge have remained largely unchanged for centuries: we read, we listen, we memorise, and we inevitably forget at least some of that information. The digital revolution may have democratised access to knowledge, but it did little to optimise the biological process of learning itself. We moved from libraries to phones, yet the ‘cognitive forgetting curve’ remained as steep as ever.
Hanna Celina, Co-Founder of Kinnu, is working on a second revolution, one that moves beyond the mere distribution of content to the optimisation of memory.
By bringing together the rigour of learning science with the generative power of AI, Kinnu is building what they call a ‘learning engine’. The goal is to help anyone to learn anything, faster and more effectively than ever before.
Kinnu has two AI-powered microlearning apps designed to help users build lasting knowledge through bite-sized, interactive, and gamified content. One is for general knowledge, and one is for high-stakes exams — the Solicitors Qualifying Examination (SQE) and Chartered Financial Analyst (CFA).
But building a frontier EdTech company in the United Kingdom has its challenges. From a copyright regime that Hanna describes as “wilfully shooting itself in the foot” to the displacement of private innovation by heavily funded but low-quality government outputs, the path to scaling infrastructure is fraught with policy barriers.
I caught up with Hanna to discuss how Kinnu is using AI to tackle high-stakes exams such as the SQE and CFA, why the UK needs to radically rethink its data-sharing frameworks, and why the most important trait in a startup hire isn’t a specific skill set, but general intelligence.
What we discussed
How Kinnu uses A/B testing to validate learning interventions at scale.
The human-in-the-loop requirement for zero-error tolerance in legal and financial education.
How vibe coding is turning writers into product builders.
The structural barriers of Britain’s copyright law and the lack of accessible public data for education.
Why R&D tax credits are better than government grants for agile startups.
Lessons for scaling AI-driven learning companies
For policymakers:
Expand fair use provisions for copyrighted material in AI model training. The UK’s current framework puts domestic AI development at a disadvantage to jurisdictions with broader training exemptions. At a minimum, a research and non-commercial training exemption is needed to maintain competitiveness with more permissive regimes.
Open up data for public good. Government-set exam banks (such as GCSEs and A Levels) and university curricula should be made available to train large language models. Currently, vital educational standards are locked in poorly formatted Word documents, preventing the creation of a ‘National Knowledge Graph’.
Avoid market-distorting interventions. Policymakers should be wary of funding government agencies to build tools that private startups can deliver more efficiently and cheaply. Government’s role should be to remove hurdles for innovators, not to crowd them out while also offering a poor-quality service.
Prioritise R&D tax credits over grants. Grants often require startups to predict their trajectory years in advance, which is impossible for agile companies. Outcome-based tax credits allow companies to pivot as the technology evolves, while maintaining documentation discipline.
For founders:
Hire for general intelligence, not specific skills. In an AI-accelerated world, specific skills depreciate quickly. Founders should look for smart generalists who have a bias to action, and who are genuinely interested in what you’re building. Those kinds of people can adapt to roles ranging from marketing to vibe coding.
Be stingy with time. Constantly seek the simplest, fastest, most stripped-down way to validate a concept.
Zero-error tolerance requires human expertise. AI has been central to Kinnu, but in highly regulated or high-stakes fields such as law or finance, accuracy below 100% is a failure. That means that while it is important to use AI to generate and prototype, reviewer agreement between multiple human experts is crucial to ensure total accuracy.
Full interview
I. Building the learning engine
What is Kinnu building, and why does it matter?
We’re building a learning engine to help anyone learn whatever they want to. That’s the big mission. I think that it really should be for everyone, no matter their age, and for any kind of information that anyone wants to learn.
We officially recommend Kinnu to users from 12 years of age, because that is roughly the writing level we use in our general knowledge app. But we’ve heard that parents use it for homeschooling with kids younger than that. I personally taught my six-year-old daughter how to read in the Kinnu app because she just loved reading the answers when I was doing my spaced repetition sessions.
The interesting thing is the motivation to learn. We have two apps: our consumer app called Kinnu General Knowledge, and then a specialised exam prep app for the Solicitors Qualifying Examination (SQE) and Chartered Financial Analyst (CFA). One is learning for the sake of learning — which is intrinsic — and one is exam prep, which is extrinsic. Working with these two different audiences really helps us progress our general mission.
What is it about the technology that makes learning better compared to traditional methods or other apps?
There are two approaches. One is the content. A lot of educational content is poor quality because it has what we call ‘fluff’ — it follows the interests of whoever is writing it rather than the most critical things a person should learn.
The second, more interesting aspect is that a lot of learning just doesn’t follow learning science. Kinnu’s biggest contribution is that we make learning faster and more effective. Many people have self-limiting beliefs about how much they can learn because they see others learning faster. What we realised is that if you give people the power tools to learn — one of the most powerful of which is perhaps spaced repetition — they can remember what they learn and build on strong foundations.
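To make the idea of spaced repetition concrete: schedulers in this family (the best known is SuperMemo’s SM-2) grow the gap between reviews each time a learner recalls a card successfully, and shrink it after a lapse. The sketch below is a simplified, illustrative version of that logic; the parameters and formula are assumptions for demonstration, not Kinnu’s actual algorithm.

```python
def next_review(interval_days: int, ease: float, quality: int) -> tuple[int, float]:
    """Return (next interval in days, updated ease factor).

    quality: 0-5 self-rating of recall; >= 3 counts as a successful recall.
    A simplified SM-2-style rule, for illustration only.
    """
    if quality < 3:
        # Lapse: restart with a short interval and a slightly lower ease factor.
        return 1, max(1.3, ease - 0.2)
    # Successful recall: nudge the ease factor, then grow the interval by it.
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    new_interval = 1 if interval_days == 0 else round(interval_days * new_ease)
    return new_interval, new_ease

# A card recalled perfectly three times in a row spaces out quickly:
interval, ease = 0, 2.5
schedule = []
for quality in (5, 5, 5):
    interval, ease = next_review(interval, ease, quality)
    schedule.append(interval)
# schedule grows: 1 day, then 3 days, then 8 days
```

The point of the widening gaps is that each review lands just before the forgetting curve would otherwise erase the memory, which is what makes the technique such a ‘power tool’ for retention.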
II. Validating the science
How do you measure learning improvement? How do you understand whether this is actually an improved approach?
When we started, we built an app and got it to about a million downloads. We knew we were using tools that the literature had shown to work, but we wanted to test this in a gold standard learning environment. We recruited 10,000 of our users into Kinnu Labs, a virtual learning lab to test learning features benchmarked against human tutors. We ran a series of 40 different experiments and learning interventions. This user research helped us to understand how much participants had learnt. We then used those findings to rebuild our already well-designed app into its next iteration.
We realised that for intrinsic learners, our retention was decent but we wanted to build a best-in-class company. The best-in-class is currently Duolingo. We found the key was in the tension between effective learning and daily use. There are serious trade-offs here. Duolingo trades off some learning efficacy for gamification components to keep people coming back every day.
We took a different approach: can we find a group of people who really want to learn and will do it every day, so we can focus 100% on the efficiency of learning? This led us to the exam prep niche. Users of our SQE and CFA apps spend north of an hour per day in the app. This gives us fantastic, fertile ground to experiment with how to make learning even more effective.
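An experiment programme like the one described above typically compares a learning intervention against a control by A/B testing a retention metric. One standard way to check whether an observed difference is real is a two-proportion z-test; the sketch below uses invented counts purely for illustration (the interview does not specify Kinnu’s metrics or methods).

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference between two recall rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: control (A) vs. a spaced-repetition variant (B),
# measuring how many of 1,000 users per arm recall an item a week later.
z, p = two_proportion_z(success_a=540, n_a=1000, success_b=610, n_b=1000)
# With these made-up numbers the difference is significant at p < 0.01.
```

With 40 interventions being tested, a real analysis would also need to correct for multiple comparisons (e.g. Bonferroni or false-discovery-rate control) to avoid chasing noise.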
III. AI and the nuance of expertise
Applying this to professional exams like the CFA or SQE sounds uniquely difficult. What makes it harder than general trivia?
They are uniquely difficult on so many levels. First, we use LLMs to generate content, but we use humans in many loops of development to review it. That’s much easier for general knowledge because LLMs have had more training on it. With law and finance, it’s all about nuance. Especially the SQE; common law is very highly specialised knowledge and that makes it difficult for LLMs.
As these people are preparing for exams that will change their lives and earning potential, the error tolerance is zero. We don’t stop until there is not a single error or unclear thing in our questions.
I’ve seen some forum posts where people suggested the accuracy wasn’t always 100%. Was having human experts in the development pipeline always non-negotiable for you?
Yes. We ran so many cycles of LLM review, and human experts were still like: Nope, nope, nope. To get to 100% correct on questions, you have to work with humans.
And, about those forums: interestingly, at least half the emails we get saying something is incorrect are mistaken — it’s the user’s understanding that is wrong. So we’re providing extra tutoring by explaining why their understanding is flawed. But even human experts miss errors, and textbook writers do it routinely. That’s why we don’t just do one human per piece of content; we double it up so there are at least two different humans reviewing, and we do reviewer agreement to ensure they agree and discuss if they don’t.
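The double-review process described here is, in measurement terms, an inter-rater agreement protocol. A standard statistic for quantifying how much two reviewers agree beyond chance is Cohen’s kappa; the sketch below uses invented labels, and the interview does not say which (if any) agreement statistic Kinnu computes.

```python
from collections import Counter

def cohens_kappa(labels_a: list, labels_b: list) -> float:
    """Cohen's kappa: agreement between two reviewers, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both reviewers labelled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each reviewer's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two reviewers labelling ten generated questions as 'ok' or 'error':
a = ['ok', 'ok', 'ok', 'error', 'ok', 'ok', 'error', 'ok', 'ok', 'ok']
b = ['ok', 'ok', 'ok', 'error', 'ok', 'error', 'error', 'ok', 'ok', 'ok']
kappa = cohens_kappa(a, b)
```

A kappa near 1 means the reviewers agree far beyond chance; disagreements (here, question 6) are exactly the items that get escalated for discussion under a zero-error policy.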
IV. Data access and copyright reform
Is there any data you don’t have access to that would help Kinnu learn better?
We don’t have access to anything. This is why we use general LLMs. If you look at British copyright laws, you realise you really cannot do anything. It’s insane how high the level of protection is for copyright holders compared to the US, where companies have the freedom to innovate faster.
It goes deeper. We could be making strides in supporting A Levels or IB [International Baccalaureate], but the exam scores and question banks are just not public, even though they are often government-set exams. And if you look at learning standards — what a learner should know for a GCSE — it’s a bunch of poorly formatted Word docs that are completely not LLM-friendly. The last time we looked, it was impossible to build a consistent ‘National Knowledge Graph’ from this data. Such a graph would be like a digital map where every concept is linked to its prerequisites and its real-world applications, enabling a learner to zoom in and out of topics and understand how concepts relate to other disciplines. But, with recent AI improvements, turning that kind of data into a National Knowledge Graph might just be possible now.
What specifically would you like to see changed in British copyright law to help innovation?
Expand fair use to cover more data. The commercial interpretation of copyright law is, in my view, flawed and very ambiguous. I can read something, but an LLM cannot? I’m just a flawed LLM that forgets 90% of what I read.
The benefit of developing new technologies and being competitive globally is incredibly important. The US and China are pacing ahead. The government’s role should be to remove hurdles so innovators run or sprint, rather than creating more hurdles to jump through.
What are your impressions of Britain’s funding landscape for startups such as yours?
I spent two months drafting a 16-page document for an Innovate UK grant, and it was rejected on a technicality. But I realised R&D tax credits are much better. Grants expect me to know what I will be doing a year from now. But I’m a startup! I have no idea what I’m going to be doing in March 2027!
With R&D credits, we just do what we’re doing — pushing innovation — and then we tell the government: this is what we have done, please consider supporting us. It forces us to keep diligent documentation, which is a good discipline for the team. I’d recommend any startup look into R&D tax credits instead of grants.
V. Team dynamics and the role of a founder
Does the rise of vibe coding impact your hiring strategy? Do you need fewer people with specific technical skills?
We’ve always had an outlandish hiring philosophy: we don’t care about specific skills as much as general intelligence. We’ve found it’s easier to get someone passionate and intelligent to learn a specific skill than the other way around. So we’ve always had a team of generalists.
What vibe coding changed is that now people hired for their writing skills can actually build products too — things that automate content production or write the flows. The general intelligence plus the willingness to experiment is what we hire for.
What skill sets are most important for the leadership team and for a founder?
For a founder, the most important skill is a bias to action. Just get stuff done. Speed to learning is what we optimise for.
The second thing I learned from my CTO is to be very stingy with your time. There’s always an opportunity cost. Always ask: “What is the simplest, fastest way to get this done?” There’s never enough time to build, and everyone is building faster around you.
Then, there’s knowing people. Chris has amazing soft skills — he’s completely changed how I work. My skill is that I just love people and am interested in what they’re doing. That’s useful for a co-founder because you develop a network and end up knowing about the cool things first. That’s how we got to use ChatGPT before anyone else.
Another thing is focus and consistent, coherent execution. My co-founders and I are good at imagining the future, but when it comes time to execute, it’s all about saying “no” to protect focus and strategic compounding. Saying “no” is a founder superpower.
How has the recent explosion in AI capabilities, specifically tools like Claude Code and OpenAI’s Codex, changed your strategy or the way you build?
We were actually one of the first users of ChatGPT; we were in user testing before it was public. Our philosophy is: what can AI do, and where do we need to step in to fix errors?
We make all our content with AI. We do several rounds of review with different models and then human review. We use AI to prototype aggressively; we are just prototyping away and doing user research on prototypes rather than designing and building in that order. We’ve also built our internal tools through vibe coding, which has become ridiculously better recently. Each of us uses AI as a mix of personal coder, strategist, and marketing brainstormer.
The one thing we don’t use AI for actually is accounting. The automated accounting services often end up being more expensive than my current team.
VI. Conclusion and advice
If you were advising the Secretary of State for Science, Innovation and Technology, what should the top priorities be?
Just spend less money on things that bring no value. The government loves to talk about helping startups while making things that actually displace startups. I mean, just look at Oak National Academy, a heavily funded quango to help schools adopt AI. A startup would never raise £40 million for that; they’d raise £3 million.
And don’t get me started on the Government’s new AI Skills Hubs. It cost £4 million and I’m sure one query to ChatGPT could do better. As a citizen, I genuinely want to see the invoices for that — because looking at it, I just don’t see where the £4 million was spent.
My advice to government is this: get out of spaces where you don’t belong or have no expertise. You’re displacing people who can do it better and cheaper.
What is one interesting thing you’ve read or listened to recently that you’d like to share with our readers?
I recently read Citadel’s article about AI’s impact on the economy. It’s a grounding read. Many entrepreneurs don’t understand basic economics, falling into either ‘AI doomsayer’ or ‘UBI utopia’ camps. The article argues that AI is a technological development similar to the internet or steam engine; it displaces tasks, not jobs. Just as Microsoft Office didn’t eliminate office workers but empowered them to do more, AI will do the same. It’s a very reassuring, supply-and-demand focused narrative. For a more techie version of that same argument, I’d also recommend Marc Andreessen on Lenny’s Podcast.