

Keep Calm and Adapt

Matt Shumer’s viral essay about AI is part of a long history of fear produced by technological change.

Matt Shumer (YouTube)

Software can now write software better than even the most advanced human experts. And where software has gone, the other learned professions will follow. That is the gist of a viral essay titled “Something Big Is Happening” posted to X on 9 February by Matt Shumer, who runs an AI start-up. The time for “cocktail party polite” answers to the question “what’s the deal with AI?” is over, he warns, and it is time to confront people with the scary truth: mass technological unemployment is upon us, and devastation of most other kinds of “cognitive work” will soon follow. His argument focusses on software engineering, and on Claude Code Opus 4.6 and OpenAI Codex 5.3 in particular. Both of these models were released on 6 February, and they are now so good that, he tells us, “I am no longer needed for the actual technical work of my job.”

Shumer is right about the new AI coding tools: they are great. I now spend less time coding because I can tell Claude and Codex what to code, how to code it, and how I want that code tested. I have not yet been able to replicate Shumer’s one-shot result, that is, issuing a single brief, walking away, and coming back to a perfect application four hours later. In my experience, AI still makes mistakes and does things I do not want it to do. Nevertheless, the transformative capacities Shumer describes are real.
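To make that concrete, the brief can be as precise as a set of tests. The sketch below is hypothetical (the function, the GST example, and the numbers are all invented, not from any real project), but it captures the shape of the exchange: behaviour is pinned down as tests, and the model writes and rewrites code until they pass.

```python
# What I specify: the behaviour I want, pinned down as tests.
# (Hypothetical example; requires pytest.)
import pytest

def test_total_includes_gst():
    # 10 percent GST on a $100 line item
    assert total_due([100.0], gst_rate=0.10) == pytest.approx(110.0)

def test_empty_invoice_is_zero():
    assert total_due([], gst_rate=0.10) == 0.0

def test_negative_line_items_rejected():
    with pytest.raises(ValueError):
        total_due([-5.0], gst_rate=0.10)

# What comes back: an implementation the model iterates on
# until the tests above pass.
def total_due(line_items, gst_rate):
    if any(item < 0 for item in line_items):
        raise ValueError("negative line item")
    return sum(line_items) * (1 + gst_rate)
```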

But while I accept Shumer’s observations, I dispute his inferences. For a start, there’s a lot more to software engineering than coding. Someone still has to ask the models to produce the app in the first place, which requires a degree of background knowledge if you are attempting anything tricky or complex, as is often the case in a large enterprise. More pertinently, Shumer’s broader argument assumes that if machines can perform many tasks, human work will disappear. That treats jobs as mere collections of tasks, when they are also systems of responsibility that involve risk and return. Used well, AI is likely to reduce risks and increase returns. However, history and experience suggest that when the cost of doing things falls, societies do not run out of work; they attempt new things and do new work. What we are likely to see is job transformation, not job elimination.

Shumer argues that software was the first learned profession to be solved by AI because AI development consumes a lot of code, and because AI models are now being used to improve AI models (per the release notes of OpenAI Codex 5.3). He argues that the technical superfluity that has hit software engineers will soon also hit lawyers, accountants, writers, journalists, customer service workers, and doctors, and that the coming “intelligence explosion” will devastate these professions and many more. But while the virality of Shumer’s post and its breathless urgency suggest that some new and apocalyptic threshold has been crossed, doomy arguments like these have been made for a long time.

In 2013, Carl Benedikt Frey and Michael Osborne released a report titled “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Their headline claim was that 47 percent of US jobs were “at high risk” of automation. Less widely known is a 2016 OECD paper by Melanie Arntz, Terry Gregory, and Ulrich Zierahn contesting Frey and Osborne’s conclusions. Jobs, they wrote, should be thought of as occupations made up of a variety of tasks, many of which are not automatable. Approaching the problem this way, the OECD paper cut the share of jobs at high risk to nine percent.

There is a long history of fear produced by technological change, dating back at least as far as the early 19th century, when textile workers known as Luddites smashed mechanised knitting frames with hammers. The British government treated Luddite vandalism as insurrection and made it a capital offence: Luddite leaders were hanged, and many others were transported to the penal colonies of New South Wales and Van Diemen’s Land (Tasmania). With sabotage suppressed, mechanisation drove down the cost of clothing production, and the textile industry exploded in scale. But while technology can be a threat to particular occupations, in the long run it creates more jobs than it destroys.

Indeed, a surge in demand for wool led to the rapid expansion of the colonies. In his history of Australia, Tony Abbott speaks of the “Wool Rush” that followed the Napoleonic Wars and preceded the Gold Rushes of the 1850s. Clothing became cheap and abundant. While some Luddite tasks vanished, such as “cropping”, the hand-shearing of cloth until it was smooth, textile employment overall grew because more clothes were sold. When textile frames were small and artisanal, clothes were expensive and people could afford only a few. When the frames entered factories and became faster and bigger, cloth and clothes got cheaper and people could afford more, and employment increased along with demand.


It is, in other words, a mistake to confuse “my skill is no longer needed” with “human labour is no longer needed”, because new technology also creates new occupations. In software, what was expensive, difficult, and risky last year is now cheaper, easier, and less risky. The availability of Photoshop, Premiere, and cheap camcorders did not end media; it caused an explosion of media, because the cost of production fell. There is more content in the world than ever because you no longer need a studio to produce it, and software will be the same. A game project that would once have required an artist, coders, testers, UI designers, writers, game designers, and producers to coordinate them all can now become a person-plus-AI project.

And many other worthy projects that would have required teams can now become person-plus-AI projects. Two years ago, a serious software project would need a business analyst, a solution designer, a developer or two, a project manager, subject matter experts, a test analyst, and a system administrator. Putting all those people in a room creates an expensive meeting and a coordination overhead, and large software projects could be spectacular failures. As the risk, difficulty, and expense of software projects collapse, adoption will surge. Paul Jarvis’s 2019 book Company of One advocated staying small as a deliberate business strategy; AI makes such endeavours easier.

Let me return to coding, which is my line of work and a core part of Shumer’s argument. While it is true that you can get an AI to write you an app, the AI cannot decide whether the app should exist, whether the business should take the risk, whether the legal exposure is acceptable, or whether the user requirement has been understood or misunderstood. These are not technical tasks; they are judgments linked to institutional responsibilities and realities. Shumer himself notes that AI adoption “will be slowed by compliance, liability and institutional inertia” in roles that require relationships and trust, physical presence, licences, and permits, particularly in heavily regulated industries.

Software engineering fell first to AI because its inputs and outputs are textual, and because it relies upon immediate feedback loops in simulated environments: code can be executed, tested, and corrected in seconds. AI is therefore well suited to computing, which is a highly symbolic problem domain. The same cannot be said of law, medicine, and management, or of civil, mechanical, and aeronautical engineering. These disciplines are embedded in physical, social, and legal systems that existed long before computing machinery. They have external real-world constraints, not just digital and informational ones.
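What an immediate feedback loop looks like in practice is that a proposal can be run against a test suite and the failures fed straight back into the model. The sketch below is illustrative only; the agent object and its propose_code method are hypothetical stand-ins, not any particular product’s API.

```python
import subprocess

def feedback_loop(agent, task, max_attempts=5):
    """Ask a (hypothetical) coding agent for code, run the tests,
    and feed the failures back until the suite passes."""
    feedback = ""
    for _ in range(max_attempts):
        code = agent.propose_code(task, feedback)  # hypothetical API
        with open("solution.py", "w") as f:
            f.write(code)
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code            # all tests pass: a fast, objective signal
        feedback = result.stdout   # the error messages become the next prompt
    return None
```

No court, patient, or construction site offers a signal that cheap or that fast, which is why the symbolic disciplines fell first.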

What I am driving at here is that software engineering is “computationally pure” in a way that the grease and oil of mechanical engineering is not, let alone the blood and guts of medicine or the enraged conflicts of legal disputes. Most economic activity involves a lot more than the symbolic processing that goes on in computers: it still exists in the physical world of atoms, not the digital world of bytes. Even software development is as much organisational negotiation as programming. The hard part is discovering what the system actually needs to do, not writing the functions that implement it. Lack of clarity and incorrect assumptions in requirement specifications have led to software disasters.

This matters because Shumer is assuming that if AI can do code, then it can do everything. It would be more accurate to say that if a task is already reducible to symbolic processing by a computer, then AI will be able to do it well. But there is a moral and legal point too: jobs are not just collections of tasks; they come with responsibilities. Someone has to stand behind the decisions taken and, if necessary, stand in court. Professions such as accounting, law, and medicine have duties of care, duties to the client, duties to the state, and requirements to observe professional ethics. A lawyer is not just a drafter of contracts. A doctor is not just a diagnosis engine. A programmer is not just a typist of code. Professionals are actors in normative systems that can be held responsible. Shumer is arguing that AI will reduce labour demand for software engineers. I am arguing that AI will reduce software costs, difficulty, and risk, and that huge opportunities are about to be created as a result.

In 2010, Queensland Health introduced a new automated payroll system for around 80,000 hospital workers. The original budget was AU$6 million, but it ended up costing more than AU$1.2 billion, and thousands of hospital staff were overpaid, underpaid, or not paid at all. The problem was not that the computer could not calculate wages. It was that the complex rules of the relevant awards, such as penalty rates for shifts, overtime for extra hours, local rostering practices, and negotiated exceptions, were not properly understood before they were encoded. Code was produced on a compressed timeline, and inadequate testing resulted in chaos: the system went live without a parallel pay test, in which the old and new systems calculate pay for the same period, side by side. Shumer’s argument implicitly assumes that coding is the difficult part of software engineering, but the Queensland Health payroll case demonstrates that understanding what the system has to do, and testing it, are critical too.
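A parallel pay test is conceptually simple: pay the same period through both systems and investigate every discrepancy. Here is a minimal sketch, with invented names and pay rules standing in for the real award logic; the deliberately misencoded overtime multiplier plays the role of a misunderstood award rule.

```python
from dataclasses import dataclass

@dataclass
class Timesheet:
    employee_id: str
    ordinary_hours: float
    overtime_hours: float

def legacy_pay(ts: Timesheet) -> float:
    # Old system: overtime at time-and-a-half on a $30 base rate.
    return ts.ordinary_hours * 30.0 + ts.overtime_hours * 45.0

def new_pay(ts: Timesheet) -> float:
    # New system: the overtime rule was misread as time-and-a-quarter.
    return ts.ordinary_hours * 30.0 + ts.overtime_hours * 37.5

def parallel_run(timesheets, tolerance=0.01):
    """Run the same pay period through both systems; report mismatches."""
    return [
        (ts.employee_id, legacy_pay(ts), new_pay(ts))
        for ts in timesheets
        if abs(legacy_pay(ts) - new_pay(ts)) > tolerance
    ]

staff = [Timesheet("E001", 38, 0), Timesheet("E002", 38, 6)]
for emp, old, new in parallel_run(staff):
    print(f"{emp}: legacy pays {old:.2f}, new system pays {new:.2f}")
# Output: E002: legacy pays 1410.00, new system pays 1365.00
```

Run before go-live, a check like this surfaces the misunderstanding while it is still an encoding bug rather than a payroll catastrophe.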

AI coding tools pose some risks (e.g., privacy and security breaches), but they also provide ways to mitigate them. Most importantly, they fundamentally change the economics of writing software. The cost, time, and risk of writing software are about to collapse, and the cost of the big meetings needed to coordinate complex projects will disappear with them. And yet panic about the obliteration of software jobs strikes me as baseless. There have always been more worthy software projects proposed than could be developed. Most were rejected because they were too hard, too expensive, and too risky: they needed an expensive team and a long timeline. Unable to get software written, people hacked out manual fixes with spreadsheets, emails, and shared drives instead. I have never heard of a worthy software project that failed to get a green light because it was too easy, too cheap, and too low-risk.

Shumer is right to argue that now is the time to attempt an ambitious project. He is right to point out that humans using AI will outcompete humans not using AI. It is also true that people and companies will need to adapt as AI shifts work from execution to specification and supervision. And Shumer is right to say that people should make more serious use of AI at work. Get used to supervising your AIs. Try giving them substantial tasks, not just simple questions. Now is the time for software engineers to get into the code models, though most are already doing so. But I see no compelling reason for chefs to be supervising AIs anytime soon. 

Yes, there will be disruption, and some jobs will be lost. But Shumer is almost certainly wrong about extinction. New jobs and new industries, as yet unknown and unimagined, will be created. History shows that adjustment to economic shocks takes time and can be painful. I am not going to dispute Shumer’s view that you should be financially prepared for a sudden redundancy; regardless of AI, it is prudent to have a nest egg in case your income ceases at short notice. But overall, AI expands the set of solvable problems and will reduce the cost of access to timely and relevant knowledge.

AI is not about to eliminate work. If you are a knowledge worker, it is going to change your job by lowering the threshold at which problems become economic to solve. It will also lower the cost and risk of software projects. But this is not going to make software engineers extinct. The place I work has a mountain of software problems and a limited budget, and with AI I can now get a lot more done. We will see a shift from expensive, inflexible “Software as a Service” (SaaS) and “Commercial off the Shelf” (COTS) products to custom-written applications that do exactly what one company needs. Mass production made clothes cheaper. Not only will AI make software cheaper, it will also enable software engineers to act more like tailors, fitting the application to the exact needs of a client at much lower cost, difficulty, and risk than last year. Instead of “off the shelf” systems that are a rough fit for their job or business, people will be able to get bespoke systems that are an exact fit. When the cost of something falls, societies often want more of it. This was the case with clothes. It will be the case with code.