Software engineers often gravitate towards projects with minimal non-software components and minimal direct interaction with the real world. This tendency, akin to searching for lost keys under a streetlight, stems from their desire to avoid real-world bottlenecks and business risks that can leave them feeling idle or powerless to influence the fate of the product they are working on.
It's well documented that the financial industry has grown disproportionately: its share of GDP is much bigger than its actual contribution to the economy. Yet this bloated financial industry still fails to protect the economy from major crises, such as the Great Recession of 2007–2008.
I'd conjecture that the same happens with software.
Nathan Marz of Red Planet Labs suggests radically simplifying the development of scalable web apps (by 100x in terms of software engineering effort, while also generally making systems more reliable and efficient) by cutting through the bloated and over-complicated database programming paradigm. Software engineers don't seem overly excited about this: such developments could eliminate many well-paid software engineering jobs at large companies. Software engineers act as bureaucrats who hold on to their power by retaining headcount.
This “bureaucratisation” of the software industry may seem like a (relatively) benign way to distribute resources in the economy. Banks did something similar when they established and grew their compliance departments, in part as a “social responsibility” response: a way to keep employing the many accountants and other back-office workers whose jobs became unnecessary with the computer automation of finance and accounting operations in the 1990s.
Alas, human software engineers will keep writing bloated, inefficient, unreliable, wicked software to increase their job security. The side effect is that the software brings less value to its users and other stakeholders.
Open-source business models also incentivise companies to create software that is difficult to operate, because then these companies can sell their support services. Open-source software companies are also incentivised to create overcomplicated and/or sprawling service APIs (or just new interfaces, standards, and protocols, leading to the fragmentation of standards) to lock in their customers by making it harder for competitors or open-source enthusiasts to re-implement these APIs.
To be clear, I don’t claim that professional programmers always write software as complicated and bloated as possible and are not motivated by the real-world value of their products. This would be absurd. Of course, many programmers care about the elegance and simplicity of the software they are creating. I also don’t think that most programmers overcomplicate software consciously and deliberately.
Creating ideally simple software for the task, i.e., with no accidental complexity at all, requires a lot of cognitive effort in itself, which may be impractical to expend given the expected lifetime of the software. So, software created in practice is expected to carry a little accidental complexity.
However, at the end of the day, it appears that the vast majority of software systems (created by human programmers today) are significantly overcomplicated beyond that optimum. And many important systems are over-complicated astronomically.
So, regardless of whether this should be explained by bad incentives or simply by the cognitive limitations of human programmers (creating simple software requires thinking a little harder), I argue that we, human software engineers, should replace ourselves with AI software engineers as soon and as fully as possible.
AI programming is coming. AI will even be able to make sense of large swaths of spaghetti code scattered across dozens of files in million-LOC codebases. AI could soon write 99% of new production code, but will not do so in certain industries, as described below.
Industries with regulated software
In areas and industries where it is mandated that all code be reviewed and vetted by people, such as aerospace, hardware designs and safety standards will likely grow only more complicated, in part because (human) systems and software engineers assisted by AIs could “hold together” larger and more complicated designs. Safety specifications and protocols also tend to grow ever more complicated.
Apart from aerospace engineering, this may happen in nuclear, power systems, and medical device engineering. It is unlikely to make this software noticeably safer or more reliable (sans significant breakthroughs in AI-assisted automatic systems verification), but it will surely increase development and maintenance costs.
Fortunately, this seems unlikely to happen in car autopilots, where Tesla already replaced complicated autopilot software with end-to-end NNs (i.e., software 2.0), and regulators didn’t object.
Other industries
In industries without safety certification for software or a strong “human programmer lobby” (and there are few industries with such a lobby: perhaps OS, compiler, and database engineering), once it becomes evident to the business that AI can program better than humans, debug software better than humans, and explain the behaviour of the software to stakeholders (including by writing pseudocode and drawing architecture diagrams if needed) better than humans, the business should eventually dismiss almost all human programmers entirely, so that humans don't even review the code written by AI.
I’m unsure about the strength of the “human programmer lobby” in the financial and banking industries (and not everything depends on this lobby). But it would be very unfortunate if finance and banking succumbed to the paradigm where people are required to review and vet all code, as described in the previous section.
Presently, in most software development teams, only human programmers themselves (rather than product designers, product managers, or business analysts) ultimately hold the most complete view of the functional behaviour of the software, let alone its operational characteristics. To dismiss human programmers, the model of software’s behaviour should be externalisable (e.g., to documentation), recoverable from source rather than programmers’ heads, and explainable to people at any requested level of detail, all by AI. So, I think it’s a good idea to develop projects in this area, such as AI tools for generating, improving, and finding inconsistencies in software documentation, software architecture diagramming, etc.
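As a toy illustration of this direction (the deterministic scaffolding, not the AI part), here is a sketch that flags public functions whose docstrings fail to mention some of their parameters — a crude stand-in for “finding inconsistencies in software documentation”. The heuristic and all names are my own, purely illustrative:

```python
# Sketch: flag public functions whose docstrings don't mention all of their
# parameters -- a deliberately naive precursor of "AI finds inconsistencies
# between code and documentation".
import ast
import sys

def undocumented_params(source: str) -> list[tuple[str, list[str]]]:
    """Return (function name, parameters missing from its docstring) pairs."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            doc = ast.get_docstring(node) or ""
            params = [a.arg for a in node.args.args if a.arg != "self"]
            missing = [p for p in params if p not in doc]
            if missing:
                findings.append((node.name, missing))
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # path to any Python source file
        for name, missing in undocumented_params(f.read()):
            print(f"{name}: docstring doesn't mention {missing}")
```

A real tool in this space would of course go far beyond string matching — checking that documented behaviour matches actual behaviour — but the externalisation target is the same.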
Governing software complexity with metrics
After the business gives up the idea that humans should understand all the code that operates its products, the complexity of its stacks should be governed by a suite of software complexity metrics. I think it's probably a good idea to develop new such metrics today.
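For a flavour of what one (very basic) member of such a metric suite could look like, here's a sketch of a cyclomatic-complexity estimate for a Python module; the CI threshold at the end is an invented example of “governing” with it:

```python
# Sketch: a crude cyclomatic-complexity estimate for one Python module --
# one example of a metric a complexity-governance suite could track over time.
import ast

# Each of these node types adds a decision point (one extra execution path).
_BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCHES) for node in ast.walk(tree))

# Hypothetical governance rule: fail the build when the complexity budget
# is exceeded ("service.py" and the threshold are illustrative).
if cyclomatic_complexity(open("service.py").read()) > 200:
    raise SystemExit("complexity budget exceeded")
```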
Governing software complexity is important not just internally for the business, e.g., because hard-to-predict emergent failure modes may stem from complicated interactions among too many components, and so that AI doesn't accidentally write software it won't know how to fix later (cf. Kernighan's Law).
The users and other stakeholders of software products should also govern their complexity because, as I noted above, over-complicated software tends to be less efficient and harder to maintain and operate, i.e., to have a higher total cost of ownership. Without external oversight, it would be too easy for software vendors to argue that their software has minimal accidental complexity and to externalise the overhead to the users.
Open-source software
I expect rapid commodification in the open-source ecosystem. Open-source software vendors will lose their clout if “AI SREs/DevOps/DBOps” emerge that can operate their software as well as humans.
The software shouldn’t necessarily be no-configuration, low-configuration, or “self-running” by design: it’s fine and relatively cheap for an LLM-based AI agent to operate the software. But it’s important for the software to have a good observability harness, or perhaps even first-class support for an accompanying SRE agent. I think it’s a good idea to develop frameworks for creating such SRE agents for various bespoke software from its source, docs, tests, and telemetry.
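To make “first-class support for an SRE agent” a little more concrete, here is one possible skeleton of such an agent's control loop. Everything in it — the telemetry shape, the action allow-list, the stubbed LLM call — is my own assumption, not an existing framework:

```python
# Sketch of an SRE-agent control loop (all names here are hypothetical).
# The agent polls telemetry, asks a model to pick an action, and executes
# only actions from a narrow allow-list; the operated software's job is to
# expose good telemetry and a safe, narrow action vocabulary.
import time

ALLOWED_ACTIONS = {"restart_service", "rotate_logs", "scale_up", "do_nothing"}

def fetch_telemetry() -> dict:
    # Stub: in reality, pull metrics/logs/traces from the observability harness.
    return {"error_rate": 0.02, "p99_latency_ms": 840, "disk_free_pct": 9}

def ask_model(telemetry: dict) -> str:
    # Stub: in reality, an LLM call returning one action name plus a rationale.
    return "rotate_logs" if telemetry["disk_free_pct"] < 10 else "do_nothing"

def execute(action: str) -> None:
    print(f"executing: {action}")  # stub for the real side effect

def agent_loop(poll_seconds: int = 60) -> None:
    while True:
        action = ask_model(fetch_telemetry())
        if action in ALLOWED_ACTIONS and action != "do_nothing":
            execute(action)
        time.sleep(poll_seconds)
```

The design point is the allow-list: the software vendor decides which operations are safe to delegate, and the agent's intelligence is confined to choosing among them.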
When choosing open-source software these days, I’d recommend paying more attention than before to the quality of its observability harness: the helpfulness, understandability, and configurability of its logs, and its instrumentability.
One of the important effects of open-source software is the reduced cost of software creation thanks to the re-use of components, as well as the sharing of the competencies needed to operate certain software (such as databases) between companies; growing such competencies in human operators and SREs is expensive and takes a long time.
With frameworks for building AI software operators, this should become much cheaper and faster, hence it will make more sense to build bespoke, integrated software for the needs of a particular product. At the same time, software integration permits reducing the integral complexity of the system, as Nathan Marz observes here.
Thus, it’s also plausible that reusable open-source software, unlike API standards and specifications, will decline in importance. Instead, AI programmers will tend to create “collapsed” software bundles (“monoliths” if you wish) tailored to the specific task. Generally, this should be a good thing, albeit retaining some affinity to a limited number of software architecture patterns will still be important for explaining software to humans.
The reaction of human programmers
As long as human software engineers are employed by companies, they will have an incentive to grow the complexity of their software stacks for job security. Thus, for the benefit of these companies’ stakeholders, companies should shed human software engineers as quickly as possible. This is roughly what Musk did when he bought Twitter, though it could be done less extremely. Or perhaps such a fast reaction was the only way to perform the “surgery” without triggering an “immune reaction” in the organisation that might have killed it.
Unfortunately, this is not for the benefit of these software engineers and their families. Furthermore, software engineers often don’t have many transferable skills.
Many software engineers will react by starting their own businesses to develop software products.
As per the observation at the beginning of this post, these products will be biased towards being software-centric, i.e., software for software that manages other software, thus still promoting software bloat and an excessive (suboptimal) level of competition in software-centric industries such as database engineering, observability SaaS engineering, DevOps and CloudOps engineering, etc.
I don’t see a policy measure that could effectively counteract this dynamic. But everything that makes real-world-facing product engineering broadly smoother, easier, and less frustratingly slow for programmers should help, such as:
Better and cheaper sensors and other robotic and IoT components
Open hardware standards and interfaces
Better customer feedback systems, perhaps borrowing or developing some ideas from Pol.is
Temporal pattern recognition, pattern categorisation, anomaly and emergent behaviour detection AI on top of raw multimodal sensor data (video, audio, telemetry). Example: Motif Analytics, but think about the same ideas applied to robotics, hardware products, and swarm systems analytics.
Systems for temporally joining multimodal data and metrics from different systems and sources operating at the same time and place (or used by the same company or human), with causal inference, anomaly detection, and root cause analysis on top of these correlated data streams (see the toy sketch after this list). This should make it easier to debug unanticipated system interference (or seize the benefit of unanticipated system synergy) in real-world deployments.
And even something relatively extravagant, such as decoding animal languages, which may create new markets of products for animals (I’m not joking).
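As a toy sketch of the temporal-join idea from the list above (the streams and column names are invented), off-the-shelf tooling already gets partway there; the hard part is doing this across vendors, modalities, and clock skews at scale:

```python
# Toy sketch: temporally align two sensor streams sampled on different clocks,
# so that downstream anomaly detection or causal inference sees one joined
# table. The data and column names are invented for illustration.
import pandas as pd

camera = pd.DataFrame({
    "ts": pd.to_datetime(["2024-05-01 12:00:00.10", "2024-05-01 12:00:00.90"]),
    "objects_detected": [3, 5],
})
motor = pd.DataFrame({
    "ts": pd.to_datetime(["2024-05-01 12:00:00.00", "2024-05-01 12:00:01.00"]),
    "motor_current_a": [1.2, 4.7],
})

# For each camera frame, attach the latest motor reading within 500 ms.
joined = pd.merge_asof(
    camera.sort_values("ts"), motor.sort_values("ts"),
    on="ts", direction="backward", tolerance=pd.Timedelta("500ms"),
)
print(joined)
```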
Summary: the simplicity/acc manifesto
Apply AI power to create simple software.
Create more a la carte tools (such as debuggers, observability tools, modellers, simulators, security analysers, verifiers, AI-first DevOps and CI/CD tools) to empower AI to create, maintain, and explain simple software more reliably and effectively.
Create more real-world-facing software than software-facing software.
Make it easier for system designers and developers to receive and account for diverse feedback from the real world and from stakeholders.
Spend the software complexity “budget” on the essential complexity of accommodating diverse, interacting users’ and stakeholders’ needs rather than on the accidental complexity of “self-consumed” software.
P.S. Thanks to David Heinemeier Hansson, Rich Hickey, and Nathan Marz (in alphabetical order) whose writing and presentations inspired a good part of my thinking above.