Thoughts on AI and software development - Part 2

The real forces that drive markets and decision makers

Uwe Friedrichsen

17 minute read

(Humanized) steel crow sculptures


In the previous post, we looked at Steve Yegge’s post in which he made a projection from today’s vibe coding to controlling fleets of AI agents in the near future that reliably take over all coding. Pondering this projection as a possible future, we realized that it is not the future we need, as our actual problems in software development do not lie in a lack of developer productivity (or to be more precise: developer efficiency) but somewhere else.

However, we all know that decisions are not necessarily driven by what we need. Therefore, we also need to explore the other forces that drive market movements and decision making. This is the focus of this post.

… but a highly wanted future …

We saw that we do not need the future Steve projected. But even if we probably do not need it, a lot of people want it. As I pointed out in my “The need-vs-want dilemma” post, there is a big difference between what people need and what they want. In short:

  • Need is about the future, long-term well-being and sustainability.
  • Want is about the present, convenience and instant gratification.

If people have to decide between need and want, they usually go for want. They want a quick pill to fix their (often complex) problems, not a major change. This behavior always reminds me a bit of a quote by H. L. Mencken:

“There is always an easy solution to every human problem—neat, plausible and wrong.” 1

Even if we know in the back of our minds that a solution to our given problem cannot be that easy, we usually go for the quick and easy solution. It is so tempting. It aligns so much better with our human instincts, which were crucial for our survival as a species in the early days of mankind – and often mislead us these days. It is so much better accepted by most other people around us, as they are driven by the same human imprint.

What does that mean for the evaluation of the future Steve sketched?

Well, if we look around, we face an industry that – no matter how innovative it calls itself – has a hard time adapting to changing needs. As I discussed in my “People only change in pain” posts, humans usually only change their behavior and habits if they feel very uncomfortable with their perceived situation. The same is true for organizations, as they are run by humans. The consequence: Old habits prevail. Organizations want the “proven” success recipes, no matter if they still make sense or not (here: more efficiency).

This also affects the decision makers: Most managers are confronted with an environment that penalizes taking risks. If they actually did something we need, it would mean breaking with the commonly accepted “best practices”. This would make them vulnerable, up to the point where their whole careers are at risk. But if they stick to the commonly accepted “best practices”, their resume only gets a little stain should they fail – and most of them are smart enough to put someone between themselves and the problem upfront whom they can blame if things do not work out as expected.

But if they take a route outside the well-trodden and commonly accepted paths, the risk is all theirs, and everyone from the employees to the shareholders will blame them personally for “putting the company at risk with this obviously crazy idea”.

Additionally, top-level managers are confronted with a middle management that has not yet made it to the top and typically views every type of change as a risk to them, their status and their careers. Thus, even if top-level managers understood the need to radically change the course of action, they would face resistance and hidden sabotage from a middle management that fears for its future status.

To aggravate the situation, the higher management ranks are incentivized only for short-term “successes”, typically measured in revenue or profit gains within 12 months or less. Their goal sheets do not factor in whether the company will die a miserable death in a few years due to the short-term “successes” they accomplish. Even more, it is usually irrelevant for their careers: Before the effects of their actions become apparent, they have usually moved on to an even higher-paying job at another company, thanks to their “obvious drive for success”. In other words: The consequences of their actions neither affect them nor are associated with them. A win-win situation of a twisted kind.

As a consequence, most decision makers will push for “more of the proven”, for the want, not the need – which, I am afraid, makes it mostly irrelevant what we need. They will push for “quick solutions” that require as little change as possible. They will push for commonly accepted solutions, adhering to the “You don’t get fired for <doing something that is commonly accepted as best practice>” mantra. They will push for the quick pill that eases the pain, not the change in lifestyle that eliminates the cause of the pain for good.

It does not matter if we like it or not. It does not even matter if it makes sense or not. Market (and thus company) reality is just about money and not being liable if things go south, as cynical as it may sound. 2

To be clear: I do not necessarily blame those people for acting like they do. I only blame those who are motivated solely by their own careers, not caring about anything else – or to be more precise: pretending to care about the company and its goals while actually only caring about themselves and how to advance their own careers, ignoring or even fighting everything else.

Nevertheless, there are many other decision makers around who do not act so narcissistically. They may also care about their careers. But they care about their company, too, about how to support its goals and its well-being. However, they are also trapped in the game I described before. Probably, they are not the hidden middle-management saboteurs. Probably, they would act differently in a different environment. Still, they have to adhere to the unwritten rules of the game.

… fueling the endless cry for more efficiency

This means we are stuck with the cry for more efficiency because this is what is accepted as “best practice”, because this is what counts as an upper-management “success” – even if we know that it is a blast from our industrial market’s past. This increase in efficiency is exactly what Steve’s post suggests: developers creating 10 or 100 times more code in the same period of time by vibe coding with a fleet of AI agents, perfectly targeting all those willing or unwilling efficiency disciples. Yummy!

Code quality from a CIO’s perspective

You may object that the code those agent fleets will create will be quite crappy and probably quite insecure. However, from a CIO’s perspective, this is not an actual issue: For them, it does not make any difference if they get their crappy code from some cheap outsourcing company, an average local company (yes, every now and then everyone delivers code a fifth grader could have hacked together in a few seconds) or a bunch of junior developers with their fleets of AI agents.

For a CIO, it remains the same problem that needs to be managed – maybe at a bigger scale, but then this approach also gives them some bigger-scale advantages: They can claim they have increased their “productivity” massively. It does not matter if they produce much more waste in much less time. They have a simple answer to the permanent allegation from the other departments that “IT is too slow”. With such a code-creation speedup, they can push back and invert the blame game. There is a lot to be won for them.

Hence, from this perspective, too, we cannot count on the companies’ decision makers to aim for what we need. Most of them will push for the “proven”, as this is the path they are (highly) incentivized for. Everything else would be career suicide – or at least highly risky – for them most of the time. This increases the likelihood that the demand side will fall for the allures of the supply-side pied pipers.

Determined to succeed at any cost

This brings us to the supply side. What can we observe there?

Could it be that we are missing something and they are in it for the greater good? I mean, with so many often very smart people investing tons of money in making this vibe-coding future a reality, there must be something more to it, mustn’t there?

Well, I am afraid there is not a lot to be expected from them. We know that the most important of the abundantly advertised “7 habits of highly successful people” is to be “100% focused” on one’s mission. For a startup founder, this means being completely focused on the notorious “hockey stick”, i.e., making their business highly successful. For a top manager of an established IT product company, it means being completely focused on increasing the shareholder value of the company.

While there is nothing wrong with this focus per se, it comes at a price. It means such “100% focused” people push aside everything, every thought, every person that could stand between them and their success, their “hockey stick”, their increased shareholder value. Success for the sake of success.

Therefore, it is not surprising that quite a few of the most successful company leaders would very likely qualify as psychopaths from a psychologist’s perspective: often very charismatic people, yet void of any empathy and willing to do anything to achieve their goals. Some of them would happily destroy the lives of millions of people if it increased their personal wealth or even just stroked their egos.

We currently see it with AI: The company leaders are not only willing to reap the hard-earned wisdom you were incautious enough to share with the world, deliberately ignoring your copyright, selling it as their own “intellectual property” and thereby destroying the basis of your existence. The most prominent ones even actively use their status, money and power to try to talk governments into changing the laws, making copyright infringement legal and depriving you of the rights you still have to defend yourself in the face of such practices.

In other words: To reach their goals, to boost their wealth, status and maybe also ego, these people do not only consciously accept that millions of people may lose their jobs and their livelihoods as a result of their actions. They even actively push in that direction, trying to make sure the affected people no longer have the rights to do anything against it. If this kind of behavior does not qualify as psychopathic, I do not know what does.

Additionally, if those people are successful, they often become venture capitalists themselves with the money they made, making sure that only people like them, the “100% focused” ones, enter the game.

Therefore, I am afraid, there is no bigger plan for the greater good behind all this. It is just about people who are mostly free of empathy and would do anything for their status, influence, wealth and success. As written before: Success for the sake of success. No bigger plan for the greater good. Nothing. Only the question of how to make as much money as possible in as little time as possible.

Currently, the most promising tech path to a lot of money in a short time is building some kind of AI solution and telling the company decision makers of the world that this AI solution will magically solve their problems and make them more successful. Hence, those in tech who are “100% focused” on success currently take this road.

Again, to be clear: I do not necessarily despise these people. Some I do despise – those who nurture and exhibit their psychopathic traits as if they were some kind of distinction to be proud of. But most of these people just try to be successful in a broken game, a system that spun out of equilibrium a long time ago and now drifts further and further into an unhealthy extreme.

A self-fulfilling prophecy

As written before, most of the people who are determined to be successful in IT currently take the “build an AI product and convince decision makers that it will solve all their problems” road because it feels like the most promising road at the moment. Part of this approach is that all those who took this road – the startups, the established IT product companies and the venture capitalists – throw their money, status, influence, media connections and everything else they have at making their idea successful.

While this can be considered normal entrepreneurial behavior (which it is, to a certain degree), the sheer number and wealth of the players involved push this behavior to a scale where it becomes a self-fulfilling prophecy, especially because the players have perfected their media game over the years.

They know exactly which buttons to push to maximize attention. They know how to leverage the media outlets as perfect and all-too-willing amplifiers for their messages. They know how to keep the flood of “news”, “success stories” and “breakthroughs” pouring at a level where even the strongest-minded resisters eventually start to ask themselves if they might be missing something, if there might be some truth to the messages sent after all.

As a consequence of this perfected media game, it does not matter anymore if the underlying idea makes any sense or not. The amount and volume of the marketing din drown out all such considerations.

It is a bit like trading such high volumes of a financial product that you start to influence the price of the product. Here we see the same effect: Lots of very influential players who are perfectly connected to every media outlet flood the market with their “AI will solve all your problems” messages to such an extent that it feels like the only message left – and becomes a self-fulfilling prophecy.

Even the most skeptical high-level company managers will eventually come to the conclusion that they must “do AI”: Everybody is talking about it, glorious “success stories” are published in quick succession, “ground-breaking breakthroughs” are reported every day, the analysts jump on it (because they have to – it is part of their job), IT conferences make it their main topic (and try to get speakers from exactly those companies that send the messages – what a subtle irony) and nobody wants to be left behind. FOMO takes over, the self-reinforcing cycle begins and the self-fulfilling prophecy runs its course.

All this reminds me of another famous quote, this time from the great Sir Tony Hoare in his seminal Turing Award speech “The Emperor’s Old Clothes” 3:

“Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars.”

And even if we take into account the inflation since the days when Sir Tony Hoare gave this speech, we are talking about a lot more money today than “just” a hundred million dollars.
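To get a rough sense of scale, here is a minimal back-of-the-envelope sketch. It assumes the 1980 date of the Turing Award lecture and approximate US CPI annual averages – both values are my assumptions for illustration, not part of the speech:

```python
# Back-of-the-envelope sketch: Hoare's "hundred million dollars" (1980)
# expressed in recent dollars. The CPI values are approximate annual
# averages (assumptions for illustration, not official figures).
CPI_1980 = 82.4    # US CPI-U, 1980 annual average (approximate)
CPI_2024 = 313.7   # US CPI-U, 2024 annual average (approximate)

amount_1980 = 100_000_000
amount_2024 = amount_1980 * CPI_2024 / CPI_1980

print(f"${amount_2024:,.0f}")  # roughly $380 million in 2024 dollars
```

And the sums currently being poured into AI exceed even this inflation-adjusted figure by orders of magnitude.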

AI considered magic

The fact that only very few people actually understand AI in general, and even fewer people understand LLMs, agentic AI and all that, additionally supports the puppeteers of the AI hype. As Arthur C. Clarke once put it in his “third law”:

“Any sufficiently advanced technology is indistinguishable from magic.”

AI is magic for most people. Very few people understand AI and even fewer people understand LLMs (to be fair: the emergent properties of LLMs are hard to understand). If you do not understand something, belief is all that is left. Critical evaluation is not possible; superstition takes over. As a consequence, most people consider AI a wondrous, magical thing that can do anything and solve any problem. Wishful thinking prevails over rational thought.

Maybe this lack of understanding of AI is the biggest issue. People do not know; they believe. The normal public discourse that usually helps people make up their minds about a topic is replaced by a war of belief between the fanboys and the objectors.

Now combine that with what we discussed before. Perfect conditions for AI providers and investors. They can make any promise. They can tell any story, no matter how incredible it sounds. The potential customers cannot evaluate whether it is true or not. They are completely at the mercy of the providers – and of their own wishful-thinking bias.

Now what?

Yeah, now what? What can we derive from all this?

We can derive that rational facts hardly matter if we want to rate the probability of the future Steve Yegge projected. It has little to do with what we need. The greater good is not part of the equation. Rather, FOMO and short-term incentives are. In the end, it is just about money and not being liable if things go south – combined with a technology hardly anyone understands.

Sounds cynical? Well, it is not meant to be. It is just the status quo of how business decisions are made most of the time. If we put aside all the nice words most managers tend to use to explain their decisions, we realize that their decisions are primarily driven by risk and career considerations.

Again, I do not say that everyone is like that. I also know several managers who put the goals of the company before their career ambitions. But even those people would not carelessly risk their jobs and careers by ignoring the “AI will solve all your problems” messages, especially if some higher-level managers are already pushing for it. As written before, they also have to adhere to the unwritten rules of the game.

To be completely clear: All this is not about something being “good” or “bad”. These are just observations I have made over my years in IT, which I have tried to present as emotion-free as possible. 4

Based on all that, I at least come to the conclusion that Steve’s projection is a possible future I cannot afford to ignore, even if it is not what we need.

Second interlude

In this post, we looked at the want side of Steve’s projection and the forces it triggers. We looked at the drivers of IT decision makers and of AI providers and investors. What we saw may have felt a bit more gloomy and controversial than what you are used to from my other posts, where I usually only lightly touch this harder-to-accept side of our business. But as I already wrote in the previous post: Rose-colored glasses do not help if we try to realistically ponder market scenarios and their likelihoods.

In the next post (link will follow), we will complete our analysis by looking at the likely short- and mid-term consequences of such a future becoming reality, including a short peek at the unresolved side effects that would come with it. Stay tuned …


  1. For a detailed discussion of the origins and variants of this quote see, e.g., Quote Investigator. ↩︎

  2. I will still continue to discuss what I think we need because I think it is important to make these things transparent and to spread the message. But even if I may convince some people to do more useful things within the constraints of their respective environments, I do not cherish the illusion that I could change the way our industry acts. ↩︎

  3. This Turing Award speech contains some more timeless quotes, like, e.g.: “I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” But the speech is not only recommendable for its quotes. It is a brilliant piece of self-reflection, advice and wisdom we rarely get our hands on. Thus, I highly recommend reading “The Emperor’s Old Clothes”. ↩︎

  4. Well, yes, I was not completely emotion-free. There are some people I really despise. Nevertheless, my general mood regarding the market forces and behavior is somewhere between neutral and a bit sad – a bit sad because I see what would be possible if we did things just a bit differently, a bit smarter, a bit less egocentrically. ↩︎