Three Big Ideas #16
Musk versus Mazzucato, the end of job polarisation, and the myth of the objective
Welcome to our weekly Three Big Ideas roundup, in which we serve up a curated selection of ideas (and our takes on them) in entrepreneurship, innovation, science and technology, handpicked by the team.
🧠 Eamonn Ives, Research Director
Is Elon Musk a genius? A glance at some of his social media activity suggests maybe not. A look at some of the companies he’s helped build suggests maybe yes. But who’s to decide? One person who reckons they have the answer is University College London’s Professor Mariana Mazzucato – and it’s firmly in the negative. Posting on Musk-owned X this week, she declared: “He invented NONE of the TECH. Just bought stuff at the right time. More marketing than science.”
There’s certainly a kernel or more of truth in this. Rockets existed long before SpaceX, satellites before Starlink, online banking before PayPal, large language models before xAI, and digging holes in the ground before The Boring Company.
Should all this debar Musk from genius status, though? No. First of all, innovation is, always has been, and always will be, an iterative undertaking in which new insights are built on previous ones. I’m sure Mazzucato can reel off a list of names of people she does think are geniuses, all of whom would happily agree their individual contributions were influenced by the work of someone else.
Second, much as I wish society would do more to venerate invention, I’d question how positive a development that would be if it meant reserving the term ‘genius’ only for scientists tinkering away on contraptions in their laboratories. Bringing any new technology into being – and then disseminating it throughout the economy – is a multifaceted process. It requires the ‘D’ as well as the ‘R’ of R&D, which the economist Stian Westlake elegantly describes as “making inventions useful.” For this to happen, skilled managers and shrewd investors who can envision a radically different future are essential. I think Musk’s track record suggests he’s pretty good at that side of the equation.
I don’t doubt that the esteemed Professor hopes for us all to enjoy a more innovative future. And in light of some of Musk’s more controversial opinions – to say the least – I empathise with the desire to take him down a peg or two. But thinking that someone’s politics renders their innovative contribution to the world worthless is simply childish – an exercise in failing to decouple.
💼 Philip Salter, Founder
According to a new NBER working paper, the years from 1990 to 2017 saw a less disruptive US labour market than any prior period measured, going back to 1880. The reason? General-purpose technologies (GPTs) like steam power and electricity dramatically disrupted the labour markets of the nineteenth and twentieth centuries, but those changes played out over decades – and by 1990 the biggest waves had largely run their course.
The authors see signs that the US is returning to a period of labour market disruption. The evidence they cite includes the end of job polarisation (the technology-driven hollowing-out of middle-skilled jobs has stopped), stalling growth in low-wage jobs, the rise of STEM occupations from 6.5% of US jobs in 2010 to nearly 10% in 2024, and a 25% decline in retail jobs over the past decade.
AI is likely the next GPT. And while the full scale of its impact is still unfolding, early indicators – such as rapid private investment in AI-related technologies and growing STEM employment – suggest it is already playing a significant role in labour market change.
So what’s the likely timeline? Remember, it was 1769 when James Watt patented his separate condenser for the steam engine, and 1882 when Thomas Edison opened the first commercial central power plant at Pearl Street Station. History teaches us that these things can take a while. But this time could be different – it’s a topic Dwarkesh Patel and Tyler Cowen politely, and constructively, disagree on.
Bonus: Speaking of AI, don’t miss our reaction to the AI Opportunities Action Plan, which the Government published on Monday.
🧭 Anastasia Bektimirova, Head of Science and Technology
“When objectives are ambitious, the only reward you’re likely to receive is deception,” write Kenneth O. Stanley and Joel Lehman in Why Greatness Cannot Be Planned: The Myth of the Objective. They go on to say:
“The future that the past created was not the vision of the past, but instead what the past unexpectedly enabled. The genius of the Wright brothers wasn’t to invent every necessary component for flight from scratch, but to recognise that we were only a stepping stone away from flight given past innovations. Great invention is defined by the realisation that the prerequisites are in place, laid before us by predecessors’ entirely unrelated ambitions, just waiting to be combined and enhanced.”
Almost no prerequisite to any major invention was invented with that invention in mind. This gives a basis to challenge, from first principles, whether our typical approach to public funding of science and technology is truly optimised to deliver the highest possible return on investment.
Take the AI Opportunities Action Plan launched by the Government this week. Under the proposed changes, the AI Research Resource (AIRR), our national AI compute infrastructure, “should evolve into a set of Mission-oriented clusters that bring together compute, data, and talent to pursue frontier AI research and other national priorities.” It will now operate under Missions-focused programme directors who will oversee compute allocation. They will “quickly and independently provide large amounts of compute to high-potential projects of national importance, operating in a way that is strategic and mission driven.”
A possible risk here is falling into the trap of the objective and drifting away from the AIRR’s original purpose. We need to ensure that scientists and students in universities have the compute they need to do AI research and to apply AI-driven methods across different fields, from computational biology to the social sciences. This is foundational for diffusing AI throughout science, including in fields where it has fewer immediately obvious applications. Against the backdrop of GPU poverty in academia, this is no small ambition in itself. Nor should the AIRR be confused with ARIA: the former is foundational research infrastructure, while the latter has a mandate to fund high-risk, high-reward research, making bets on specific opportunity spaces. We should, as Stanley and Lehman write, “let novelty search roam in an endless maze that stretches to the horizon in every direction.”
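For the curious, that closing line is more than a metaphor: novelty search is an algorithm from Stanley and Lehman’s own research, which selects for behavioural novelty rather than for progress toward an objective. Here is a minimal sketch of the idea in Python – illustrative only, not their implementation, with behaviour_of standing in hypothetically for a real task such as an agent’s final position in a maze:

```python
import math
import random

def novelty(behaviour, archive, k=5):
    """Novelty = mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")  # everything is novel before the archive exists
    dists = sorted(math.dist(behaviour, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def behaviour_of(genome):
    # Hypothetical stand-in: in the maze experiments this would be the
    # agent's final (x, y) position, not its closeness to any goal.
    return (genome[0], genome[1])

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

archive = []
population = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(20)]

for generation in range(100):
    # Rank by novelty alone – no objective fitness anywhere in the loop
    ranked = sorted(population,
                    key=lambda g: novelty(behaviour_of(g), archive),
                    reverse=True)
    archive.extend(behaviour_of(g) for g in ranked[:3])  # remember the most novel
    parents = ranked[:10]
    population = [mutate(random.choice(parents)) for _ in range(20)]
```

In the original maze experiments, rewarding novelty in this way solved deceptive mazes that objective-driven search could not – the book’s core piece of evidence against the myth of the objective.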