Thoughts on AI and software development - Part 3
Consequences and unresolved questions

In the previous post, we looked at the want side of Steve’s projection and the forces it triggers. We took off the rose-colored glasses and tried to take an unembellished look at these forces, even if what we saw was a bit gloomier and more controversial than we would have liked.
In this post, we will continue our analysis by looking at the likely short- and mid-term consequences of such a future becoming reality – again without any sugar coating.
A few words of clarification
While we refrain from the sugar coating, a few words of clarification before we get started with this part of the analysis:
I know that AI advocates in particular are very quick to call everything “too pessimistic” that puts AI in a potentially unfavorable light, and to call everyone a “doom-sayer” who does not blindly adhere to the AI promise of salvation. After all, the players in the market use the same divide-and-conquer tactics many politicians have used over the centuries: divide people into segregated camps that fight each other. This allows you to do whatever you like without any relevant resistance because the camps are busy fighting each other.
And the tactics work all too well: both sides cheer everything that fits their camp’s beliefs while they revile everything that does not.
Therefore, to be completely clear: I could not care less about either side and their camp beliefs!
I am neither pro AI nor against AI. AI is a tool that opens up a set of new options – for better or worse. I look at the options and evaluate their potential and consequences. And while I am aware that it is basically impossible not to be at least a little biased, I always try to step back, start my evaluation with as unbiased a mind as possible, and see where it leads.
From a technological perspective, I find AI – including GenAI – very fascinating. I witnessed the previous AI hype wave in the late 1980s and early 1990s. I remember all the problems we had getting a neural network to distinguish just a few quite simple low-resolution images. When I see what neural networks are capable of today, it is just “Wow!”
However, when I look at the market dynamics and the behavior of the players, I am not that fascinated. This part rather puts me off. Observing how easily the whole market falls for the players’ tactics without noticing that they are just puppets in someone else’s play leaves me somewhere between amazed and sad.
What I am doing here is just what I wrote before: I try to examine and extrapolate Steve’s projection about AI agent fleets taking over software development in the next year. I look at the content of the projection, ponder its consequences and extrapolate it into the future. Nothing else.
If this extrapolation does not look the way you want it to, please do not complain to me. It is just the result of a game someone else set up. Hence, please complain to them.
With that introduction, let us jump in and ponder the consequences of Steve’s projection …
Pondering the short-term consequences
What would it mean if Steve’s projection played out exactly as he described it: What if software development were done in the future by a few junior developers observing and guiding a fleet of AI agents, acting more like supervisors than actual software developers?
The most likely short-term consequences for the IT market would be:
- A lot fewer developers would be needed. I.e., millions of developers would be laid off – more likely the senior developers than the junior ones, because the juniors “are more open to the signs of the times” (in more honest words: “they cost less and do not ask critical questions”).
- The whole IT service provider industry that specializes in outsourced software development of any kind would collapse (including the company I currently work for). Why should anyone pay money for external service suppliers if they can easily do the job on their own? (Junior) software developers who can be hired for a low salary are available en masse (see the previous point). More people are laid off.
- A whole media, conference and training industry exists that focuses on presenting and teaching (junior) software engineers software development, new programming languages, new versions of programming languages, new libraries and frameworks, new development tools, and so on. This industry would collapse, too. If nobody codes anymore, all of this becomes irrelevant – leaving room for only a few to pivot. More people are laid off.
- The same would happen to the software development tools industry. Who still needs IDEs or other tools that make humans more “productive” in their daily software development routine if AI agents take over the job? Consequently, this industry would also collapse. More people are laid off.
- As a consequence of all the people being laid off, salaries in the IT industry would collapse. Why pay any reasonable salary if hundreds of people apply for a single AI agent fleet supervisor job opening?
- This in turn would lead to a sharp decline in the number of people studying computer science. No money to be made. Questionable career opportunities. Highly uncertain future prospects. The people currently studying computer science would try to change their subject. Hardly any new students would show up. Most universities would soon shut down computer science as a subject. More people are laid off. A whole branch of academia basically gone. No new generation of software developers.
These are the most likely short-term consequences if you take Steve’s projection and simply extrapolate it: Developers won’t be needed anymore. Education, training and the accompanying institutions and media won’t be needed anymore. Tools supporting humans in software development won’t be needed anymore. Only AI agent fleets and a few underpaid “fleet supervisors” will be left.
From a financial perspective, the logical endpoint of this projection would be the redirection of most of the money companies currently invest in software engineers into the pockets of the shareholders of the companies that provide the AI services, the AI agent fleets.
This means that, as an AI solution provider or an investor behind the solution providers, I would do anything to foster the AI hype and FOMO, making sure nobody finds any time to breathe and think until this projection is reality.
Unpredictable code quality
There would be more short-term consequences. E.g., most AI coding solutions – at least currently – act like the notorious “Stack Overflow developer” on steroids: They basically stitch together code fragments they have seen in various places on the Internet (because that is where the training data for the underlying LLMs came from), in the worse case until the code compiles, in the better case until the equally self-created tests pass – but without really understanding the code created and its quality properties.
Hence, at least in the short run, such an AI agent fleet will act like a team of over-motivated “Stack Overflow developers”, which will have a hard-to-predict impact on the quality of the software created.
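To make this concrete, here is a small, purely hypothetical sketch – the function, its test and its flaws are all invented for illustration, not taken from any real tool – of what “the tests pass, the quality is unknown” can look like:

```python
# Hypothetical example of "Stack Overflow developer on steroids" output:
# the self-created test passes, but nobody asked about the quality properties.

def parse_price(value):
    # Stitched-together happy-path solution: strip the currency symbol and
    # the thousands separator, convert to float.
    return float(value.replace("$", "").replace(",", ""))

def test_parse_price():
    assert parse_price("$1,299.99") == 1299.99  # passes, ship it


# What the self-created test never exercises:
#   parse_price("1.299,99")  -> 1.29999  (European formatting, silently wrong)
#   parse_price(None)        -> AttributeError (no input validation)
#   float for money          -> rounding issues; a Decimal would be more robust
```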
The human fleet supervisors will be of little help. Remember, they will be massively underpaid. So why bother putting yourself under stress trying to ensure good software quality if you basically earn as much as the receptionist – especially if your boss expects you (or actually your agent fleet) to complete a huge pile of requirements every day? Human software developers have been fired because the AI agents are better and cheaper. So let the agents do whatever they want. You only make sure the pile of requirements is “done” at the end of the day.
Over time, the code generated may become better in terms of runtime reliability, i.e., the likelihood of buggy code will decline (even if there are probably limits to this progress – see the section about unresolved side effects in the next post (link will follow)). But looking at the short-term consequences of the AI agent fleet projection, the quality properties and runtime reliability of the code generated will be questionable.
Moving on to mid-term consequences
So much for the most likely consequences if the projection plays out exactly as Steve described it. However, if we take this short-term projection and look a bit further into the future, we realize that things are not that simple.
In the mid-term, companies that fired all their developers except for a few AI agent supervisors will find themselves outpaced by competitors that made a different decision: Those other companies used the new possibilities to level up their game instead of shaving a few salary dollars off their balance sheets. They kept their developers and provided them all with AI agent fleets, thereby creating an IT organization that can produce much more software per unit of time.
All their developers can implement new features much faster because every developer they have is now an “AI-supercharged” software developer. And most likely, they equipped not only their developers with state-of-the-art AI support but everyone else in the company as well. This way, they can move a lot faster than the companies that short-sightedly fired their developers.
As a consequence, everyone needs to level up their game to the level of those “AI-supercharged” companies. Everyone needs to run as fast as they can just to not fall behind – a typical case of the Red Queen’s race. 1
However, that means the companies that fired their developers in the first place will need to hire them back, because a single person can only channel a certain amount of information per day, no matter how powerful the AI agent fleet might be.
The abstraction fallacy
Some people – especially the AI advocates – then argue that this hiring back would not be necessary because AI agents would let us do software development at a much higher abstraction level and thus make software engineers orders of magnitude more productive, enabling them to implement orders of magnitude more features per day. Therefore, a few AI-powered software engineers would be able to do the work of hundreds of non-AI-powered engineers.
Even if we leave aside the fact that the Red Queen’s race effect will still apply, I highly doubt that claim because we have failed to reach that higher abstraction level for at least half a century, no matter how hard we tried. It is not that we did not try. We did! We have been pushing for decades to raise the abstraction level, but we never got beyond the level of a 3GL programming language. We tried different kinds of programming languages, we tried 4GL, we tried MDA, we tried different kinds of graphical programming languages, we tried rule systems, we tried low code/no code and much more to raise the abstraction level of programming, to empower the “citizen developer”.
But we failed! We failed time and again as a whole industry. All these really smart approaches to raising the abstraction level of software development never caught on except in a few niches. The problem was never the coding part. The problem was always the part before the coding starts, mostly due to the complex (and sometimes chaotic) structure of human behavior and interactions.
We would only achieve a higher abstraction level if, e.g., business departments gave up insisting on all their special requests and agreed on creating coherent, non-conflicting demands. Before you start wailing “But the customers!”: The vast majority of conflicting, incoherent demands are not related to customers, their experience and their needs. Most of them are rooted in internal peculiarities, carefully protected habits and kingdoms, lack of internal collaboration, short-sighted thinking, and the like. More often than not, customers suffer a worse experience due to this internal incoherence.
But this is how things work in most companies. Features are requested at arbitrary levels of detail, with no regard for whether they contradict each other or conflict with the existing system. As a consequence, the requirements that will hit the vibe coding developers with their AI agent fleets will almost certainly remain at the same abstraction level they have been at for more than 50 years.
Therefore, a higher abstraction level is not to be expected – at least not until the AI agents start watching “2001: A Space Odyssey” more regularly and answering “I’m sorry Dave, I’m afraid I can’t do that.” whenever someone requests a conflicting or contradictory feature. 2
But let us assume we were able to solve the problem of conflicting and contradictory requirements. In this (unlikely) case we would immediately realize that we do not need any large-scale AI anymore because we solved the problem about 50 years ago: The solution is called “standard software” (also known as “COTS” if you buy and “SaaS” if you lease).
Just buy or lease some piece of standard software for your 80%-90%+ of non-differentiating IT. You can even use it without years of customization and tons of code written in some crappy standard-software-specific language. Just buy or lease it. Use it. You are done. Your IT department’s productivity just went through the roof because all the software engineers can focus on the 10%-20% of differentiating software.
Individual software development only exists at its current scale (and with it the cry for AI-based software development to speed it up) because business departments insist that their needs are “so special” and “so different” that standard software will not suit them – even though most of the time they are not.
To raise (individual) software development to a higher abstraction level, the business departments would first of all need to give up their self-perception of being “so different”. To be clear: I do not assume any malicious behavior. But change is hard, and if it is possible to bend the software towards habits developed and nurtured over many years instead of adapting those habits towards a default, non-conflicting and non-contradictory way of working, most people choose to bend the software rather than change their habits.
Additionally, we would need to get rid of quite a few inefficiencies outside and inside software development (see, e.g., my “Simplify!” blog series for more information about this topic). Then we would be able to raise software development to a higher abstraction level, including the expected productivity gains. However, based on my experience, this is not in sight.
Same, same, but different
Hence, no higher abstraction level is in sight with AI agent assisted vibe coding. Companies will need to hire back the software engineers they previously laid off, now labeled “senior AI agent coordinator” or whatever, probably at a higher salary than before their layoffs.
With that, we will most likely start the next round of the game, except that the companies will now need to pay what is probably the biggest part of their IT budgets to the AI providers. So much code will have been written by AI agents in the meantime – without anyone actually looking at the code itself – that big parts of the companies’ system landscapes will basically have become unmaintainable without agentic AI support.
The introduction of agentic AI-based software development fleets will also result in a kind of deskilling of the human software developers, as Grant Williamson noted on Mastodon:
“I need to be very clear, that the push towards ‘vibe coding’ – that is, deliberately deskilling people – is because AI code assistants are an (increasingly expensive) subscription service.
If you know how to code, you can just write Python, C, Java, R, PHP, whatever for free and make things. You may not own the tools of production, but at least you’re not renting them.
If you have been deskilled so you only know how to vibe code, you will be paying for that privilege forever. […]”
This means we will get an interesting self-reinforcing effect:
- Lots of code has been created by agentic AI solutions that can only be understood and modified using agentic AI support.
- Software developers need to rely more and more on their AI tooling.
- Based on that, their own skill set deteriorates because they code less and less on their own.
- This leads to even more code that can only be understood and modified using agentic AI support.
As a consequence, it will become impossible for most companies (and developers) to develop any software without using – and paying for – the agentic AI tools. Additionally, due to the ongoing Red Queen’s race, everyone will need to upgrade their AI capabilities further and further. This means the companies will need more and more agentic AI support to write and maintain code – probably an order of magnitude or two more than expected in the short run.
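To make the dynamics of this loop a bit more tangible, here is a deliberately oversimplified toy model. All rates and numbers in it are invented for illustration only – it is a sketch of the feedback structure, not a forecast:

```python
# Toy model of the self-reinforcing loop described above.
# Every parameter below is a hypothetical illustration value, not measured data.

ai_only_code_share = 0.10  # share of the codebase only maintainable with AI support
hands_on_skill = 1.00      # relative hands-on coding skill of the developers

for year in range(1, 6):
    # The more AI-only code exists, the more developers must lean on the tools ...
    ai_reliance = 0.5 + 0.5 * ai_only_code_share
    # ... which erodes hands-on skill a bit each year ...
    hands_on_skill *= 1.0 - 0.15 * ai_reliance
    # ... which in turn pushes even more new code into the "AI-only" bucket.
    ai_only_code_share = min(1.0, ai_only_code_share + 0.2 * (1.0 - hands_on_skill))
    print(f"year {year}: AI-only code {ai_only_code_share:.0%}, "
          f"hands-on skill {hands_on_skill:.0%}")
```

Whatever concrete numbers one plugs in, the qualitative behavior stays the same: dependence on the tools grows while the ability to work without them shrinks – which is exactly the lock-in the quote above describes.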
Again, as an AI solution provider or an investor behind the solution providers, I would do anything to foster the AI hype and FOMO, making sure nobody finds any time to breathe and think until this projection is reality.
Third interlude
In this post, we looked at the most likely short- and mid-term consequences if Steve’s projection were to play out exactly as he described it. We saw a rather puzzling picture:
- In the short run, most of us will probably lose our jobs, and a whole industry, including its education system, will break down.
- In the medium run, we will probably be hired back – with a different job title and probably a higher salary.
- Software development will become 100% dependent on agentic AI fleets.
- Companies will need to use more and more agentic AI in their software development to stay competitive in highly saturated markets. This means they will compete for the same overall revenue (which is a core property of a saturated market) while additionally paying a significant amount of money to the agentic AI solution providers.
No matter how we turn it: In this projection, the only big winners would be the agentic AI providers and their investors. Everyone else would lose one way or the other.
Hmm, this does not sound too desirable. But that is where we would probably end up if Steve’s projection were to play out exactly as he described it.
Even if we all know that things rarely play out exactly as someone describes them – neither Steve’s projection nor my extrapolation – there is still a good chance we will get closer to this future than we would want to. Remember, the profiteers of such a future (understandably) game the market with everything they have, and most people fall for their tactics.
Therefore, we need to look further into this possible future and its most likely consequences to understand better how to hedge our options in such a future or a similar one. In the next post (link will follow), we will complete the analysis by looking at some unresolved side effects and questions that would arise from such a future before looking at our hedging options in the fifth and final post of this blog series. Stay tuned …
1. Of course, there would also be other (and much better) ways for a company to move faster (while reducing costs significantly at the same time) than “supercharging” everyone with AI, as I laid out in too many previous posts to link here. But as long as companies only look at efficiency when it comes to “improvement” and neglect the effectiveness of what they are doing, “supercharging” employees with AI to move faster will most likely be the way they choose. Additionally, it is probably the least risky path for the decision makers involved, which is a big driver, as we discussed in the previous post. ↩︎
2. For a more detailed discussion of this problem, see my “Software – It’s not what you think it is” blog series, especially the second part. ↩︎