Welcome to our weekly Three Big Ideas roundup, in which we serve up a curated selection of ideas (and our takes on them) in entrepreneurship, innovation, science and technology, handpicked by the team.
⏪ Eamonn Ives, Research Director
Passing laws in Britain is usually pretty easy. Ensuring they work properly is the challenging part. We have a tendency in this country to legislate with the noblest of intentions, but with little clarity on what success should look like. What we* might term ‘post-legislative scrutiny’ often goes missing, which creates a host of issues further down the line. (*Or more accurately in this instance, Lord Goodman, whose recent posts about the Football Governance Bill inspired this blog.)
One way to improve things would be to treat major legislation more like a scientific experiment. That would mean defining, upfront, a set of testable hypotheses: measurable goals the law is supposed to achieve, and a timeframe for achieving them. We might also want to establish a formal, independent process to judge whether those goals have been met. This task could be assigned to relevant existing regulators, or to an ad hoc Parliamentary committee convened a set number of years in the future.
This would create a credible ‘off-ramp’ for bad policies. Politicians wouldn’t have to admit personal failure if their pet policy backfires. They could point to the independent verdict and move on. It would depoliticise course correction and normalise cleaning up mistakes, rather than entrenching them out of pride or fear of bad headlines.
Of course, no system would be perfect. Measuring policy impact is messy, and attribution is often contested. But simply having to define clear, testable objectives at the outset would itself be a step forward. It would force better policymaking by making aims concrete, and by keeping open the possibility that some ideas just won’t work.
If we really believe in evidence-based policymaking, maybe it’s time we started legislating with hypotheses we’re willing to test, and willing to abandon if they fail.
🐕 Philip Salter, Founder
It’s a dog’s life. At least it is, according to Trevor Klee, an entrepreneur who makes drugs for animals. He is often asked if there is a Food and Drug Administration (FDA) for animal drugs. There is – and, as he explains in Works in Progress, both human and animal regulators could learn from each other.
Back in 1983, the Orphan Drug Act was introduced to address the lack of treatments for rare human diseases – those affecting fewer than 200,000 Americans – by offering regulatory fast tracks, tax credits, and extended market exclusivity. These diseases had been largely neglected after the FDA tightened drug approval standards in the 1960s, making development too costly and risky for low-revenue conditions.
Seeing similar challenges in veterinary medicine, the animal health industry pushed in the 1990s to create an equivalent framework for neglected species and conditions. The resulting Minor Use and Minor Species (MUMS) Act, introduced in 2004, went further still: regulators were willing to take more risks with animals’ health, particularly for uncommon species and conditions.
In 2018, the MUMS framework was expanded to include major uses in major species where trials would otherwise be prohibitively expensive. This brought major diseases in cats and dogs into scope, opening the door for companies like Loyal, which aims to give dogs longer, healthier lives – and eventually help humans too.
Klee envisions a new FDA pathway for human drugs that lowers costs and complexity by allowing approval based on a reasonable link between biomarkers and outcomes, with follow-up trials within five years. This could reduce investor risk, encourage innovation in expensive fields like Alzheimer’s, reveal hidden disease variation, and make treatments more affordable.
The trade-off? Less certainty. Patients might pay out of pocket for unproven treatments and rely more on doctors’ judgement. But with strong post-market oversight, Klee thinks the benefits – access, affordability, and innovation – would outweigh the risks.
🛰️ Jessie May Green, Events and APPG for Entrepreneurship Coordinator
According to the IPCC, global greenhouse gas emissions need to peak this year in order to keep global warming to 1.5°C – or even 2°C – above pre-industrial levels. But how do we know if we’re on track? Thanks to Climate TRACE, data on the drivers of climate change is now available in close to real time for the first time.
Co-founded by former US Vice President Al Gore, Climate TRACE has mobilised the tech community to unite diverse sources of climate data. Using a mixture of satellite imagery, infrared imagery, artificial intelligence, and reliable on-the-ground sources, they can not only track overall global greenhouse gas emissions, but also pinpoint local hotspots such as landfills and factories – and even moving sources like planes and ships.
Described by Forbes as a ‘planetary MRI’, Climate TRACE will report this climate data once per month, with only a 60-day time lag. Promisingly, the data just published for February 2025 show that emissions have fallen compared with the year before. The decrease was marginal, however, and only time will tell whether this marks a genuine peak. Still, having this data so early is invaluable.
As the saying goes, you can only manage what you can measure. Ensuring a successful transition to net zero requires us to know exactly where to focus decarbonisation efforts, and this tool gives us that. Hopefully, it will also serve to create a culture of transparency and accountability that inspires collective action. Take a look around the Climate TRACE interactive map – there is much to explore.