ChatGPT already knows - Part 3

What humans and AI solutions are (not) good at

Uwe Friedrichsen




In the previous post, we discussed where we came from as an industry, where we currently are, and what the job of a software engineer (should) comprise. We also saw that most software engineers fulfill only a small part of what the role actually comprises, which puts them in direct competition with modern AI solutions – a competition we most likely will not win.

In this post, we will discuss what humans and AI solutions are (not) good at to get a better understanding of how we can position ourselves in a more valuable way without competing with AI solutions.

What AI tools are good at

If we ask ourselves what AI tools will be good at (and already are to a certain extent), we see several areas in which they excel:

  • Create well-defined code fragments (“well-defined” meaning a sufficiently large body of knowledge exists about how to solve the given coding task)
  • Create complete solutions for routine tasks (again requiring a sufficiently large body of knowledge about how to create the given solution)
  • Improve existing code

Examples of the first area are tasks like “I need to access API X using Java” or “I need to access my database and read Y from it using Python”. Depending on the task, it can sometimes be tricky to get the code right. But in the end, it is just a matter of having the required detail knowledge.
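To make this a bit more concrete, here is a minimal sketch of the kind of well-defined fragment meant here – reading data from a database using Python. The database file, table and column names (shop.db, orders, id, total) are made up for illustration; any AI tool trained on enough similar examples can produce code like this reliably.

```python
import sqlite3
from contextlib import closing

def read_order_totals(db_path: str) -> list[tuple[int, float]]:
    """Read id and total of all rows in the (hypothetical) orders table."""
    # closing() guarantees the connection is closed even if the query fails
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute("SELECT id, total FROM orders").fetchall()

if __name__ == "__main__":
    for order_id, total in read_order_totals("shop.db"):
        print(f"Order {order_id}: {total:.2f}")
```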

An example of the second area could be a task like “Generate a web site for Z”, which is often used to conjure up the limitless power of modern AI solutions. But to be frank: generating a web site is a solved problem. Even generating an e-commerce site is a solved problem in times of Shopify and Co. It always surprises me that we still create one e-commerce site after another from scratch, using relatively low-level tools like 3GL programming languages and generic libraries and frameworks (let alone normal web sites). But that could be a topic for another blog post.

The third area is closely related to the first: if you present an existing piece of code to an AI solution, it can draw on its whole detail knowledge base to find improvements to that code.
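As a small, made-up illustration of this third area: given the verbose function below, an AI tool will readily suggest the idiomatic variant – pure detail knowledge about the language, no understanding of the surrounding system required.

```python
# Before: verbose manual accumulation, as often found in existing code bases
def total_price_before(prices: list[float]) -> float:
    result = 0.0
    for price in prices:
        result = result + price
    return result

# After: the kind of improvement an AI tool typically suggests,
# drawing on its knowledge of Python's built-ins
def total_price_after(prices: list[float]) -> float:
    return sum(prices)

assert total_price_before([1.0, 2.5]) == total_price_after([1.0, 2.5])
```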

What AI tools are not good at

When looking at what AI tools are not good at, we can observe that, up to now, they show little evidence of being able to handle the following areas at a near-human level:

  • Create solutions for novel situations
  • Discuss, moderate and balance often contradicting needs and demands with multiple stakeholder parties
  • Consider alternative solutions for a given set of (often contradicting) needs and demands, evaluate their trade-offs and drive the decision process towards an actual solution
  • Maintain an overview of highly complex technology and system landscapes
  • Design and implement code in a way that it embeds into highly complex solution landscapes while fulfilling all functional and non-functional needs
  • Design and implement code in a way that is aligned with the explicit and implicit rules and principles of the encompassing solution
  • Maintain and evolve code over a long period of time, ideally creating code that supports its long-term evolution in the first place

The first point is what modern AI tools typically are not good at by design. Simply put, basically all of the popular tools are trained on a large corpus of knowledge and reproduce whatever has the highest probability of matching the question asked (“question” in a general sense). While doing that, they are sort of able to mix bits and pieces from multiple facts they learned.

This often provides surprisingly good results – as long as the tools can refer to facts from their training corpus. If the corpus does not contain any facts regarding the question asked, the tools typically start to “guess”. Again, they mix and match parts of the facts they learned. But as there are no facts referring to the question, the answer with the highest probability of matching the question is, at best, not useful. 1
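To illustrate this failure mode (not how real LLMs work internally – this is a deliberately oversimplified toy): the model below returns the most probable continuation it has seen, and for inputs outside its training data it still returns something instead of saying “I do not know”.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": next-word frequencies learned from a tiny corpus
corpus = "the cat sat on the mat the cat ate the fish".split()
model: dict[str, Counter] = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    model[word][next_word] += 1

def predict(word: str) -> str:
    if word in model:
        # Known input: return the highest-probability continuation
        return model[word].most_common(1)[0][0]
    # Unknown input: the model still answers - it "guesses" from whatever
    # it has seen, and presents the guess just as confidently
    return random.choice(corpus)

print(predict("the"))      # "cat" - well covered by the training data
print(predict("quantum"))  # an arbitrary guess - the tool answers anyway
```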

Therefore, quite a few people claim modern AI solutions will leave only the area of creating completely novel software solutions to humans. But to be frank: this is an extremely small share of all software development. Most software development is about combining bits and pieces of known solution approaches. Therefore, we should leave this as a side note and focus on the other points listed, as they are hopefully more interesting in the given context.

Except for the first bullet point, everything else is about when things become really complex and ambiguous in software development: Ambiguous requirements, unclear and often contradicting needs and priorities, highly complex solution landscapes, many hidden assumptions in the problem and solution domain, combined with the need not just to write some piece of code once but also to maintain and evolve it over a long period of time.

Highly complex and ambiguous settings and long-lived IT systems are the norm in enterprise software development.

In such environments, the first big challenge is to understand what needs to be done. Different stakeholders have different needs and demands. The needs and demands are incomplete. They are ambiguous. Often, they contradict each other. Unspoken expectations are implied. Current demands are not consistent with past or future demands. Etcetera.

Even if some business analysts, requirement engineers, product owners or the like sit between the originators of the needs and demands and software development, the “pre-processed” requirements that need to be turned into code still tend to be ambiguous, incomplete, inconsistent, contradicting, and so on.

Human needs and demands can be vague and ambiguous without any apparent problems. Code needs to be clear and precise. Otherwise, it does not work. Hence, before being able to design a solution (let alone to write code), the requirements “mess” needs to be turned into something clear and unambiguous enough that it can be transformed into a software solution. We need to close the gap.

Next, a viable solution option needs to be found. There is never just one option; there are always many, each having different pros and cons regarding the (often contradicting) needs and demands. Hence, different options need to be evaluated, their trade-offs need to be discussed with the different stakeholder groups, agreement must be achieved and decisions must be made. This includes understanding the existing, highly complex system landscape comprising lots of technologies, architectures, designs, programming languages, COTS components, integration styles, and so on.

After a decision is made, the solution needs to be designed in a way that it embeds nicely into the existing (highly complex) system landscape. It needs to satisfy all the functional and non-functional needs while minimizing the overall added complexity. This also means the design and code need to be aligned with all the explicit and implicit rules, guidelines, principles, patterns and idioms of the encompassing solution(s). Well, at least it should be that way.

All these activities, including writing the code itself, are not lined up nicely as in a waterfall approach. Instead, we usually must jump back and forth between the activities all the time, gradually converging towards a working solution while continuously collaborating in complex patterns with the other stakeholder parties.

Finally, the code written needs to be maintained and evolved over a long period of time. It needs to be read and understood multiple times by many different people – much more often than it needs to be (re-)written. It needs to be modified. It needs to be extended. And so on. A key property of software that distinguishes it from most physical goods is that software needs to be continually changed over its whole lifetime to preserve its value. 2

This means people will come back multiple times to (almost) any piece of code written, and they will need to understand the code before they can modify it to implement future requirements. Or they just need to understand what the code does to solve an issue in a different part of the code base (e.g., because that other part calls this code and the existing documentation does not make clear what this code does).

Hence, the actual challenge is not just to write the code but to write it in a way that anyone who needs to understand and modify it in the future will be able to do so easily. This also includes pondering the most likely future changes based on the dynamics of the business domain, to find fitting modularization and abstractions at multiple levels.
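As a tiny, hypothetical illustration of designing for likely change: if we expect a shop’s discount rules to change frequently, isolating them behind a single, intention-revealing function means future readers have only one place to understand and modify.

```python
from dataclasses import dataclass

@dataclass
class Order:
    total: float
    is_returning_customer: bool

# The business rule we expect to change most often is isolated behind
# one intention-revealing name. Future changes (seasonal campaigns,
# loyalty tiers, ...) only touch this function.
def applicable_discount(order: Order) -> float:
    return 0.05 if order.is_returning_customer else 0.0

def final_price(order: Order) -> float:
    # The rest of the code depends only on the stable abstraction
    return order.total * (1.0 - applicable_discount(order))

print(final_price(Order(total=100.0, is_returning_customer=True)))  # 95.0
```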

All these tasks are part of software engineering work. They require very different skills from just knowing how to write correct code for a clearly specified task. If we look at the capabilities of modern AI tools, as impressive as they may be, they show little evidence yet that they could handle these tasks at a near-human level.

There are certainly other areas I forgot in this short list where modern AI solutions also do not excel. But a recurring pattern should have become visible: at least for now, modern AI solutions do not excel in areas that go beyond doing a well-defined task in isolation, without considering its consequences outside its local context or in the future. They do not take context and time into account, at least not in the way that is very often needed in contemporary software development 3.

And they are not good at dealing with all the often irrational, chaotic, personal-interest-driven human behavior, be it regarding demands, finding agreements or other types of interaction – which brings us to the strengths and weaknesses of humans.

What humans are (not) good at

If we look at what humans are good at, we can observe they are good at exactly what AI tools are not good at. Admittedly, most people do not like ambiguity, uncertainty and complexity. We prefer clear settings, easy-to-follow rules and certainty about what comes next – and this is okay. This is a risk-minimization preference that helped us survive as a species.

Nevertheless, we are able to navigate terrains that are not nicely laid out, clear and obvious. We are able to navigate under ambiguity, uncertainty and complexity and discover our way towards a solution that suits our needs while taking the external forces into account. This is another important human trait that helped us to survive as a species.

We just often forget this important capability because we were trained for many years to function like a cog in a machine: Do repetitive work without making any mistakes. It starts at school, is reiterated over the course of our studies and is solidified throughout our professional careers: Perform. Always. Do not make any mistakes. Only highly productive workers with a flawless track record (i.e., without being caught making a mistake) have a chance to land a good job in today’s job market. At least, this is what we are told all the time and what we eventually internalize.

Even if we like to consider ourselves “creative knowledge workers”, if we are honest, most of our work consists of doing quite similar tasks over and over again. The details may vary, but the tasks stay the same. Repetitive work. Unfortunately, this is exactly what humans are not good at. We make mistakes when doing the same stuff over and over again. Interestingly, this is another human trait that helped us to survive as a species.

We are also not good at memorizing huge amounts of information over a long period of time. If we do not use the information, i.e., recall it regularly, we tend to forget it over time. Some core concepts will remain but the details will gradually fade away. This, by the way, is another important trait of our brain. If our brains were not able to forget things, we would not exist anymore as a species. 4

Summary

In this post, we discussed what humans and AI solutions are (not) good at to get a better understanding of how we can position ourselves in a more valuable way without competing with AI solutions. We have learned that the strengths and weaknesses of humans and modern AI solutions actually complement each other.

Still, we often decide not to leverage our strengths as humans and instead put ourselves in places where we directly compete with the strengths of AI solutions. One reason is a primeval human risk-minimization trait.

But there are more reasons. In the next post, we will examine how the omnipresent hyper-specialization in IT pushes software engineers into areas of human weaknesses and how nerd culture (involuntarily) reinforces the push. Stay tuned …


  1. Another option would be not to provide an answer if the probability of providing a good answer is too low. But usually, the AI tools are built in a way that does not allow this option – either because the underlying AI algorithm of the tool cannot recognize such a situation or because the tool designers did not include an “I do not know” answer option, i.e., the tool must always provide an answer. ↩︎

  2. You would never change a sports car into a family car into a pick-up into a truck into a cargo aircraft into a cargo ship over time because of changing needs and demands. You would sell the old vehicle and buy a new one that was explicitly designed and built for the changed need. In software, this is different. We continuously modify the existing software to adapt it to our changing needs and demands. This does not only mean that the software equivalent of a race car might turn into a cargo ship over time. It also means that we often see mixtures of requirements implemented in software that we would never see in the physical world. E.g., in software we see cargo ships powered by jet engines connected to wings attached to the ship. The steering wheel and transmission are still from the sports car, and for some unclear reason you still need to be able to attach child safety seats all across the cargo holds. While this may sound ridiculous at first glance, it is still a very moderate illustration of how software typically looks a few years after its initial release. Usually, it is much more chaotic and a lot more complex. The only reason most people are not aware of it is that software is invisible. The analogy with something from the physical world (despite all its limitations) makes it a bit “tangible” how weird the things are that we do with software all the time. But the lack of a physical reference system leaves most people without any clue what they are asking for or what they are building. See the classic papers “Programs, life cycles, and laws of software evolution” by Meir M. Lehman, published in the Proceedings of the IEEE, Vol. 68, No. 9, September 1980, and “No silver bullet” by Frederick P. Brooks, Jr. (found, e.g., in Brooks’ famous book “The Mythical Man-Month”) for a more detailed discussion of the issue. ↩︎

  3. I know that “here and now” is a typical awareness boundary of most software development projects: “Make it work here and now. Everything else, left and right, before and after, is not our concern.” And while I also know that this extremely harmful thinking is still very widespread in most companies, I do not want to discuss it in more detail here but leave it for a future post. For now, let me just state that we have known for many years that project thinking in software development creates much more harm than (potential) benefit. It is a relic of last century’s efficiency thinking and it is about time we leave it behind as an industry. ↩︎

  4. I can recommend reading some literature about how our brain works. It is very interesting to see how some of the seeming weaknesses of the way our brain works turn out to be essential for our survival. ↩︎