Software - It's not what you think it is - Part 6

Making sense of it all regarding AI solutions

Uwe Friedrichsen

17 minute read



In the previous post, we discussed that software is invisible, which deprives humans of an essential reasoning instrument. We also looked at the malleability curse: the property of software that it can be bent and twisted in totally absurd and nonsensical ways while still working in some way.

In this post, we will summarize the findings from all previous posts of this series and discuss what they mean for us and how we could improve the situation – with a focus on AI solutions in this post and on the humans involved in the next and final post (link will follow).

The story so far

In the first post of this series, we discussed the assembly line fallacy, the misconception that software development is the same as, e.g., building a car. We saw that building software is so efficient that most people have forgotten it exists. Our product is running software, not source code. We build our products using a compiler and linker or, more recently, a CI/CD pipeline. Writing code, on the other hand, is part of the design process. The design is not complete until the last line of code has been written.

In the second post, we discussed the broken abstraction dilemma and its consequences. We saw that if we want to be able to describe our demands in a short, concise way, we need some kind of implicit or explicit high-level abstraction that enables us to specify things compactly. Breaking the abstraction not only means working against it to get the non-fitting demands implemented; it usually also requires one or two orders of magnitude more specification detail and results in brittle solutions that are hard to maintain and change.

We also saw that most of the time, we are confronted either with incomplete abstractions or with requirements that do not accept the limits of the abstraction, both leading to the aforementioned situation, i.e., we need to bypass the existing abstraction and implement parts of the solution at a lower abstraction level – with all its consequences.

In the third post, we discussed the greenfield fallacy. We saw that the actual challenge is not to create some code for a given requirement. The challenge is to integrate it into an existing, highly complex web of interacting, often conflicting demands, expressed as code – which in practice is the main job of a software engineer.

In the fourth post, we discussed the value preservation dilemma of software. We saw that software – as opposed to almost all physical goods – needs to be changed and adapted to the ever-changing needs and demands of its environment to preserve its value. We also saw that all comparisons with physical goods regarding “maintenance” do not work.

Finally, in the fifth post, we discussed the invisibility dilemma and the malleability curse. We saw that software is invisible, which deprives humans of an essential reasoning instrument. The malleability curse is the property of software that it can be bent and twisted in totally absurd and nonsensical ways while still working somehow. In combination, the two often lead to poorly designed and written software without anybody noticing it.

Making sense of it all

What can we learn from all this? We already briefly discussed the effects on AI writing software in the respective posts. However, it is also interesting to look at the effects all these misconceptions tend to have on humans and at what we can do about it.

Well, as we already started on this in the prior posts, let us first summarize what the misconceptions mean for AI solutions writing software and then move on to the human side in the next and final post of this series (link will follow).

Writing software is not a shop floor activity

The assembly line fallacy does not directly influence the ability of AI solutions to write software. The problem is that people (often decision makers) confuse software development with shop floor construction and therefore think AI solutions can take over software development the same way robots took over shop floor activities. However, we have seen that software development is design and not assembly. Thus, all narratives of AI solutions taking over software development the same way robots took over shop floors are simply misdirected.

This does not mean that AI solutions will not have any effect on software developers. Most likely, they will have quite some impact on the work of white-collar workers, including software developers. But the narrative is different. The trivial shop floor comparison does not work. We still have to learn what the actual AI narrative will look like, no matter what the omnipresent market criers of the current AI hype try to make us believe.

Conciseness vs. degrees of freedom

The broken abstraction dilemma, on the other hand, is particularly relevant regarding AI-based software development. The idea that business experts express their needs and wants in natural language and the AI will create the required software solution from those requirements works only as long as the business experts accept the limitations of a high-level abstraction.

If you want to express your needs and wants with just a few sentences of natural language, a quite high-level abstraction is needed that defines the details of the software building blocks the solution is built from. To make this sentence a bit easier to grasp: It is definitely nice (and often also desirable) if you only need to articulate a few sentences to describe the solution you want. The tradeoff is that all implementation details of the solution building blocks are encapsulated and predefined by the high-level abstraction. This means a high-level abstraction provides far fewer degrees of freedom than a 3GL, a regular programming language at a much lower abstraction level.

You cannot tweak this web page to look one way and that other web page to look a different way. You cannot tweak the underlying algorithm to work differently here and there, just as you like. For each building block of the high-level abstraction, there exists exactly one predefined way of implementing things (which also defines how the web page will look). This is what makes it possible to describe a solution with just a few sentences.
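To make this trade-off more tangible, here is a minimal sketch in Python. The ReportPage building block and everything in it are purely hypothetical, invented for illustration; the point is only to show where the conciseness comes from and what is given up in return:

```python
# A deliberately tiny, hypothetical high-level abstraction: a "report
# page" building block whose layout is fixed. None of these names come
# from a real framework.
from dataclasses import dataclass

@dataclass
class ReportPage:
    """One predefined building block: a title plus key/value rows.

    The HTML structure is fixed by the abstraction. There is
    deliberately no parameter to tweak the layout - that is exactly
    what keeps the solution description short.
    """
    title: str
    rows: list[tuple[str, str]]

    def render(self) -> str:
        # Exactly one predefined way to render the building block.
        body = "\n".join(
            f"<tr><td>{label}</td><td>{value}</td></tr>"
            for label, value in self.rows
        )
        return f"<h1>{self.title}</h1>\n<table>\n{body}\n</table>"

# "Show me the revenue per region" - one sentence of intent suffices:
page = ReportPage("Revenue per region", [("EMEA", "1.2M"), ("APAC", "0.9M")])
print(page.render())
```

Wanting this page to look different from that other page means bypassing render() and writing the HTML yourself, i.e., dropping down to the lower abstraction level with all its extra detail.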

Some people now tend to object that we could solve this issue with adequate configuration options or the like. However, if you think about it for a moment, you realize this does not help: the additional options force the business experts to describe the desired configuration, which significantly increases the number of sentences needed to describe the desired solution. In the end, configuration options are nothing but a kind of “programming at runtime” which moves you down to a lower abstraction level. 1
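A small, hypothetical sketch illustrates this drift toward “programming at runtime” (the option names and the expression evaluation are invented for illustration; real systems use rule engines, scripting hooks or similar mechanisms):

```python
# Configuration that started as simple switches and drifted into a
# little programming language. All names are hypothetical.
from types import SimpleNamespace

config = {
    "discount.enabled": True,
    "discount.rate": 0.1,
    # No longer a simple switch: this option is an expression in a
    # little language. At this point the "configuration" has become
    # a program at a lower abstraction level.
    "discount.condition": "order.total > 100 and customer.segment == 'gold'",
}

def final_price(order, customer):
    if not config["discount.enabled"]:
        return order.total
    if eval(config["discount.condition"], {}, {"order": order, "customer": customer}):
        return order.total * (1 - config["discount.rate"])
    return order.total

order = SimpleNamespace(total=200)
customer = SimpleNamespace(segment="gold")
print(final_price(order, customer))  # 180.0
```

Describing the desired values for all these options is specification work, just at a lower abstraction level than the few sentences we started with.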

We have seen many times with approaches like 4GL, MDA, No Code/Low Code or simply customizable business software such as SAP that the business experts do not accept the predefined implementation details of the abstractions provided. It does not matter if this is because they actually need the extra degrees of freedom to get their job done or if they want them for less relevant reasons. In the end, the result is the same: The non-acceptance of the predefined implementation details forces developers to break the given abstraction, multiplying the effort needed to implement and maintain a solution (and to work against the constraints implied by the respective tool that are needed to implement the higher-level abstraction, which makes the work extra tedious).

The same is true if AI solutions are used. Admittedly, they can nicely help bridge the gap between natural language and a formal description. However, they implicitly also work with abstractions to turn natural language descriptions into executable code. The more concise the natural language description is expected to be, the higher the abstraction level needs to be – which in turn means the fewer degrees of freedom are available for the solution implementation. You cannot have both. That does not have anything to do with AI. It is a much more fundamental problem.

Either you want all degrees of freedom. This means you need to specify things at a very low level, which means extremely long solution descriptions. In the end, the source code, often consisting of millions of lines of code, is nothing but the solution description in a 3GL, representing the union of all requirements ever articulated.

In other words: Wanting all degrees of freedom requires you to describe the whole source code at the same level of detail – just in natural language. And to be very clear: This is a road nobody wants to take. It is hard enough to describe the desired behavior of a system with a 3GL, i.e., using some kind of formal language. But trying to achieve the same using something as imprecise, ambiguous and often inconsistent as natural language will most definitely turn into a nightmare.

This leaves the other option: You want a concise solution description. But this means giving up many degrees of freedom and accepting that a solution building block of the abstraction used always looks and works the same.

But what about going with the default abstraction most of the time and only moving down to a lower level if really needed? This would give us the best of both worlds, wouldn’t it?

Leaving aside the problem of creating an AI solution that is capable of implementing multiple abstraction levels and unerringly switching between them whenever needed: Enterprise reality has taught us that things do not work this way.

If this were the way business experts acted, SAP implementations would not be those behemoths that have been customized beyond recognition and, in most companies, lost their upgrade capability many years ago. And Low Code solutions would be the norm in software development because they usually provide exactly such options.

But for some reason, the business experts almost always require all the customization options that are only available with a 3GL. This is not a judgement. It is just an observation from more than 30 years in this business, and I see little evidence that this is going to change.

Thus, my cautious prediction is that AI solutions will suffer the same fate as 4GL, MDA, No Code/Low Code and the like because business experts (as well as the responsible decision makers) expect to get both: concise requirements descriptions in a few sentences while having all the degrees of freedom of a 3GL at hand – which cannot work, by definition.

Writing new code is easy

The greenfield fallacy means the actual challenge is not to create some code for a given requirement but to integrate it into an existing, highly complex web of interacting, often conflicting demands, expressed as code. This means an AI solution would first need to understand arbitrarily complex code to then figure out where and how to modify it and add new code to implement the desired functionality without breaking the behavior of the existing code. This task is orders of magnitude harder than simply creating some code in a void or taking a few lines of existing code and seeing if they can be improved.

As a consequence, the magic AI software developer that takes a few lines of natural language and then knows exactly how to integrate them into the millions of lines of code of the existing system landscape – satisfying all implicit assumptions and expectations of the business expert without breaking anything – is nowhere to be seen yet. Supporting software developers and making them more productive? Yes, sure. Taking over their jobs? Well, no – at least not in the near future.

The value preservation dilemma reinforces the aforementioned challenge. It makes clear that changing an existing codebase is the predominant task, not creating new code. The brownfield prevails, not the greenfield.

We could still attempt to let an AI create the greenfield solutions and then hand them over to human software developers for the subsequent brownfield activities. Unfortunately, we do not have any evidence that the code created by AI solutions would be understandable, maintainable and evolvable. Even if a few lines of code generated by an AI solution appear to have these properties, creating a solution that consists of hundreds of thousands or millions of lines of code and can be maintained and evolved is a very different story.

This requires understanding all the requirements, how they influence each other, how to organize them best to create a nicely modularized codebase at several module abstraction levels that all need to exhibit “high cohesion, low coupling” properties, and much more. This is a very challenging task, and up to now there is no evidence that existing AI solutions have the faintest idea of how to create such systems. 2

And it would be a bold bet to create a system that way just to possibly learn that we can throw it away as soon as the first change needs to be implemented – which typically would be immediately after its initial release.

Writing the whole system again and again

But what about creating the whole system from scratch every time something needs to be changed? After all, the AI solution creates code at breathtaking speed. Well, this would mean you need to keep your whole conversation with the AI solution and understand how to modify it to achieve the desired result. Just to be clear: Merely saying “Additionally do XYZ but everything else needs to work as before” will not work, even if requirements are still often phrased in this nonsensical way when human software developers need to implement them. 3

In the end, the business expert would need to learn how to describe the whole system in an unambiguous way using natural language. As natural language is always ambiguous and inconsistent to a certain degree, this approach is most likely doomed to fail. Either the expert will need to add clarifications all along the way or find a way to reduce the natural language to a more formalized, unambiguous language. Well, we tend to call such languages “programming languages”, and such approaches have been covered by 3GLs, 4GLs, MDA, DSLs, No Code/Low Code environments, etc. for a long time already.

Additionally, the AI solution would need to be able to digest the whole natural language specification at once, which can be arbitrarily long. Even the most powerful AI solutions – wasting unbelievable amounts of energy during training and in production compared to existing non-AI solutions – have very strict limits regarding the amount of information they can take into account at once.
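A rough back-of-the-envelope calculation illustrates the gap. All numbers are assumptions chosen for the sake of the argument, not measurements:

```python
# How does a typical enterprise codebase compare to the amount of
# information an AI model can take into account at once? The numbers
# are rough, illustrative assumptions.
lines_of_code = 2_000_000  # a mid-sized enterprise system landscape
tokens_per_line = 10       # rough average for source code
context_window = 200_000   # typical order of magnitude today

codebase_tokens = lines_of_code * tokens_per_line
print(f"Codebase: ~{codebase_tokens:,} tokens")  # ~20,000,000 tokens
print(f"Exceeds the context window ~{codebase_tokens // context_window}x over")  # ~100x
```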

Problematic discussions

The invisibility dilemma and the malleability curse reinforce the problems. As software is extremely malleable, an AI solution could create anything from perfectly sensible code to utter nonsense, and just from looking at the code it is often hard to tell the difference. The fact that software, due to its invisibility, deprives us of essential reasoning tools does not make things easier.

The discussions become especially hard because people who do not write code have no means to assess how much sense the code makes. They cannot judge the quality of a codebase, especially regarding its suitability to be modified and evolved over time. They cannot tell a solid codebase apart from the figurative skyscraper standing on the tip of a needle. They just feel some sort of (understandable) pain because “software development takes so long”.

However, more than 99% of the time this is not the fault of the software developers but due to many other deficiencies in the software development process (see, e.g., my blog series “Simplify!” which discusses many drivers of long and wasteful software development cycles and how to overcome them).

The “it takes so long” pain makes people who do not write code themselves very vulnerable to all claims that promise “dramatically accelerated software development”. As decision makers typically belong to this group, there is a significant risk they will jump at any solution that promises to accelerate software development without being able to rate its sensibility.

Sixth interlude

If we combine all these observations, we can state that the “AI software developer” replacing human software developers is nowhere to be seen anytime soon. In the end, writing the code itself is not the problem. Everything else is. And introducing an AI solution does not change all these other things that make software development hard and often slow. Those problems persist.

They could be solved – at least to a certain degree – but that has nothing to do with AI. They rather have to do with human habits, with suboptimal responses to misunderstood problems, often created by those who demand AI-based software development the loudest (creating yet another suboptimal response to a misunderstood problem).

Of course, we can and should use AI solutions to support (human) software developers, to make them more productive, more “efficient”. AI solutions can especially take over the relatively repetitive routine work we still see in many software projects: those situations where basically the same thing needs to be done over and over again, but the deviations between the implementations are a bit too subtle to extract them into a subroutine or a library. Or when you need to write the code to access that library over and over again. And so on.
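To give a taste of what such routine work looks like in code, consider two data-access functions that share a shape but deviate in details too subtle to factor out cleanly. The tables and functions below are entirely hypothetical, invented for illustration:

```python
# "Almost but not quite identical" routine code: same shape, subtly
# different details (table, columns, filter). All names are made up.
import sqlite3

def fetch_customers(conn: sqlite3.Connection, active_only: bool = False):
    sql = "SELECT id, name, segment FROM customers"
    if active_only:
        sql += " WHERE active = 1"
    return conn.execute(sql).fetchall()

def fetch_orders(conn: sqlite3.Connection, customer_id: int):
    # Same shape as fetch_customers, but with a bound parameter instead
    # of an optional filter - churning out the next variant of this
    # pattern is exactly the kind of work an AI assistant can take over.
    sql = "SELECT id, total, created_at FROM orders WHERE customer_id = ?"
    return conn.execute(sql, (customer_id,)).fetchall()
```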

Even higher-level code building blocks are possible. All this helps to make software developers more productive and leaves them more time for the hard parts of their work: understanding the existing system and figuring out where to put what in which way to implement the new requirement without breaking existing behavior – ideally in a way that fosters modifiability and evolvability.

And even more importantly: It could help developers to improve their effectiveness. While most people still worship efficiency, it is not our main problem anymore. Our main problem is the lack of effectiveness. (Not only) in software development, we very often do the wrong things in a highly efficient manner (for an explanation and the details behind this observation, please see my blog post “Forget efficiency”).

AI solutions can only improve efficiency. They cannot improve effectiveness. Doing the right thing requires observing, reasoning, experimentation, continuous learning and a lot more. You need to be aware of your environment and you need to be able to come up with original solutions. As long as we do not have an actual AGI, this is still the realm of humans.

Therefore, I think it would be great if AI solutions provided the required efficiency, creating the room for humans to improve their effectiveness. This could be a win-win situation for everyone. But maybe I am just too idealistic … we will see.

In the next and final post of this series (link will follow), we will shift our point of view and ponder what all these misconceptions mean for the humans involved and how we could improve their situation. Stay tuned …


  1. The whole concept of configuration options stems from the fact that changing software often takes too long. As a business department person, you may need to change the behavior of an application within a few hours. However, the typical software change process seen in most companies would require several weeks, months or even years to implement the change needed. This is not due to software developers being slow. It is due to the software development processes in place. This creates an unacceptable situation for the business department person: They must be able to change the related software behavior in a timely manner whenever they need to. Hence, they demand that code be implemented which enables them to change the software behavior at runtime. If the software development process allowed business department persons to request such (often simple) software behavior changes and get them within minutes or a few hours, most of the exuberant configuration options of typical enterprise software solutions would not exist. ↩︎

  2. To be fair: We humans also often fail to accomplish this task with satisfactory quality. ↩︎

  3. This kind of requirement never works, not even with human software developers, because the second half of the sentence unfolds into an acceptance criterion consisting of thousands or even millions of often implicit assumptions that nobody can oversee, including the person who created the requirement. It would only be possible to check whether such a requirement is implemented correctly if 100% test coverage of the software existed – not only testing all existing branches in the code at least once but also testing all possible permutations of all existing code branches. (To get an idea of the magnitude: just 30 independent branch points already allow 2^30, i.e., more than a billion, distinct execution paths.) As this level of test coverage is prohibitively expensive in almost all enterprise software contexts, such requirements are not actual requirements but rather nonsense – at least the second part. ↩︎