ChatGPT already knows - Part 1

Disrupting the role of the software engineer

Uwe Friedrichsen

13 minute read



At the moment, we see a lot of discussions revolving around ChatGPT and other modern AI tools such as GitHub Copilot. Many managers praise them as the new silver bullet against the (often self-made) skills shortage, a bullet that will make software developers redundant while driving software development efficiency to unprecedented heights.

Most software engineers, on the other hand, do not tire of assuring each other that these tools are nothing but yet another productivity aid, completely useless without the guidance of an experienced software engineer. Hence, basically nothing will change for them.

For example, I recently attended a software development conference where one of the keynotes discussed the impact of modern AI tools on the role of the software developer. The keynote speaker showed a few examples of AI tools supporting him in writing code. After the demo, he concluded that the tools were nice but quite useless without him.

Unfortunately, he works in a very specialized domain, and the task he chose as an example did not have a lot in common with typical enterprise software development tasks. Therefore, I at least was not surprised by the limited support the AI tools were able to provide. Still, this keynote was quite representative of the current reasoning inside the software engineering community.

So, everything is fine? We can continue doing things as we always have? Other domains may be threatened by the abilities of modern AI tools, but we are not? Let the managers dream their dreams of their new silver bullet. They will wake up soon enough and learn they still need us.

Or are we the ones who are dreaming and need to wake up urgently?

Personally, I think the most likely future lies somewhere in the middle. Modern AI tools are not a silver bullet. But I think they will change a lot more than most software engineers are willing to admit.

As this topic would have become way too long for a single blog post, I decided to split it up into several posts:

  • This first post discusses why modern AI tools will become more disruptive for software engineers than most of us want to admit.
  • The second post summarizes where we came from as an industry, where we currently are and what the job of a software engineer (should) comprise.
  • The third post analyzes what humans and AI solutions are (not) good at.
  • The fourth post discusses how hyper-specialization emphasizes our human weaknesses, how nerd culture (involuntarily) does the same, and lists a few ideas for how to better leverage our human strengths.
  • The fifth post dives deeper into the first of the aforementioned ideas: becoming a “full-range engineer”.
  • The sixth post reveals why we need to be a bit rebellious if we want to preserve our value as software engineers and looks at the path ahead.
  • The seventh and final post argues that we need more than just a changed self-conception of software engineers for our future well-being (going way beyond the boundaries of IT) and sums up the whole blog series.

With this outline of the blog series in mind, let us dive deeper into the new challenges modern AI solutions pose to software engineers.

Detail obsession

Let me start my reasoning with an observation. When creating presentations, I always need to decide whether I want to go broad or deep. The topic usually is too big to go both broad and deep within an hour or less. Often, I decide to go broad. I put the topic in context, try to explain the Why behind the topic, how it is organized, and so on. I also try to go deep in a place or two, primarily to give an impression of how the topic feels in practice.

I also do this because this is the type of presentation I prefer myself if I want to learn about a new topic. I want to understand the overarching structure of a topic, how it fits in the bigger picture, how things interact, its pros, its cons and so on. This gives me a frame I can use to fill in the details when diving deeper into the topic later on my own.

I do not particularly like having to start with lots of details without any encompassing context. Yet that is usually what I need to do if I try to learn a topic from its documentation: lots of details, from which I have to painstakingly derive the overarching structure, the interaction patterns, the tradeoffs, and so on.

Hence, I usually attend sessions about topics that are new to me to spare myself from figuring out the whole bigger picture on my own. My expectation is that the presenter has already done that work and provides this framing information. Unfortunately, many presentations I have attended focus only on details – no bigger picture, no context, no overarching structure, no when to use it and when to avoid it, only details.

The interesting observation I made is twofold:

  • Many of the presentations that only deliver lots of details about a topic attract a big audience. Often, these are the best-attended sessions at developer conferences.
  • Quite a few self-proclaimed “hardcore” software engineers gave me the feedback that they found my presentations too “superficial”. They expected more “details”: more code, more nifty details, more cool hacks, more diving into the darkest corners of the topic – ideally combined with a lot of live coding.

This always left me a bit confused because, as written before, I have a different expectation of what a good presentation about an IT topic should provide.

But then again, our whole IT industry is so much into hyper-specialization that I should not be surprised by the unbridled desire of software engineers for more and more details – the more complicated they appear, the better. Software engineers are incentivized to “know more and more about less and less until they know everything about nothing” 1.

The more of an expert you are in a narrow area, the higher the chances you can land a well-paid job in an enterprise IT department. The more complicated details you know that the person next to you does not know, the better your standing in the company.

Hence, it should not be surprising that many software engineers focus on knowing lots of nifty details in a small area of expertise. This gives them a competitive advantage.

Until now …

Enter ChatGPT and its friends

Recently, a new generation of AI tools entered the stage. The most popular among them is ChatGPT, probably followed by GitHub’s Copilot. While ChatGPT is more of a general-purpose AI-based tool that can also support software development, GitHub Copilot is focused on supporting software developers. Besides these two tools, there are many others, and currently it feels as if another dozen gets announced every day. What all these tools have in common is that they are based on modern AI approaches and deliver impressive results – way better than any other tools did in the past. 2

Many people point out that these tools do not always deliver correct results, which is true. From all I have seen so far, this error rate is the core argument of the people who claim that modern AI tools will not affect the role of the software engineer: these tools would be useless without the guidance of a human who understands the task at hand.

Personally, I would not bet on it. On the one hand, humans also make errors very often. The figure of speech “To err is human” does not come out of thin air. On the other hand, I think this reasoning is too stuck in the here and now. Think back 3 years. 2020. Would you have thought there would be an AI tool with the capabilities of a ChatGPT? Most likely not. Now think 3 years into the future. 2026. Do you think ChatGPT and all its friends will not have advanced their capabilities by then?

Most likely, the error rate of AI tools dealing with regular software development tasks will have dropped to super-human levels by then, i.e., the AI tools will make fewer errors than most software engineers do. Take, e.g., ChatGPT. It is a general-purpose AI tool which, more or less accidentally, is also able to write code. It was never designed to specifically support software development. This ability was just a byproduct of the training.

Now think about what is going to happen if you train such a tool on software development only. We currently see lots of companies and software engineers fine-tuning generic LLMs (large language models) for specific tasks using techniques like RLHF (reinforcement learning from human feedback), leading to a much higher accuracy compared to the generic models. There is also a lot of research into how to reduce the error rates of those models. Finally, LLMs and fine-tuning techniques have reached the OSS community, which will create even more momentum. 3
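
To make this a bit more tangible, here is a minimal sketch of the plain supervised fine-tuning step, i.e., without the reward model and policy optimization that full RLHF adds on top, using the Hugging Face transformers and datasets libraries. The base model “gpt2” is just a small stand-in for a generic LLM, and the data file “code_samples.jsonl” as well as all hyperparameters are placeholders chosen for illustration:

    # Minimal sketch: supervised fine-tuning of a small, generic causal language
    # model on code samples (not full RLHF, which adds a reward model on top)
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base_model = "gpt2"  # small stand-in for a generic LLM
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Hypothetical training data: one JSON object per line with a "text" field
    # containing a code snippet plus its surrounding context
    dataset = load_dataset("json", data_files="code_samples.jsonl", split="train")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True,
                            remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="code-tuned-model",
            per_device_train_batch_size=4,
            num_train_epochs=1,
            learning_rate=5e-5,
        ),
        train_dataset=tokenized,
        # Causal LM objective: predict the next token (mlm=False)
        data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
    )
    trainer.train()

The point is not the specific libraries but how little glue code is needed: the mechanics of specializing a generic model on code have become a commodity, which is exactly what fuels the momentum mentioned above.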

Detail knowledge is not a differentiator anymore

Hence, I think it can be expected that the current error rates of AI tools will drop quite fast and reach super-human levels over the course of the next years. This leads to a consequence that will be more disruptive to the role of software engineers than most people in our community want to admit.

Detail knowledge, the big differentiator of today’s hyper-specialized job market, will lose its value.

  • You just learned how to use that new framework? ChatGPT already knows.
  • You just learned how to optimize that complicated code section? ChatGPT already knows.
  • You just learned how to develop around the garbage collector? ChatGPT already knows.
  • You just learned how to use the nifty details of that popular tool? ChatGPT already knows.
  • And so on …

I used “ChatGPT” as a placeholder for the current and upcoming AI tools. The point is that detail knowledge, the competitive advantage of many software developers since the dawn of IT, loses its value. Any detail knowledge you can learn can also be learned by modern AI tools. In fact, those tools usually have already learned it if the knowledge is accessible via the Internet – which is true for most knowledge.

I mean, let us be frank: Most of the time, we expect knowledge to be available for free via the Internet, be it training videos on YouTube, be it conference talks about virtually anything, be it technical blog posts, be it sites like Stack Overflow, be it the documentation of the tools themselves. If the knowledge and the respective technology are not available for free (at least as a “community edition”), the community usually ignores the technology, which means it does not become popular in the context of enterprise software development. And if the knowledge is freely available via the Internet, modern AI tools can access it, too.
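
To illustrate the point with one of the bullets above, here is a hypothetical sketch of asking a hosted LLM for exactly this kind of detail knowledge, using the OpenAI Python SDK (v1.x style). The model name and the question are placeholders, and any comparable tool or API would do just as well:

    # Hypothetical sketch: asking a hosted LLM for detail knowledge on demand
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an experienced Java performance engineer."},
            {"role": "user",
             "content": "How do I tune the G1 garbage collector of the JVM "
                        "for a low-latency service? Give concrete flags."},
        ],
    )
    print(response.choices[0].message.content)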

In the past, the main challenge was that not everybody could learn everything. The sheer amount of knowledge forced you to specialize in some way. If you knew enough about something that was in demand and you were able to deliver it, it gave you a good livelihood. And if you knew more about a topic of high demand than others, it made you a sought-after software developer. This defined the career paths of many software engineers who did not strive for a career in project management, line management or the like but wanted to stay in software engineering.

With modern AI tools that can learn virtually unlimited amounts of software-engineering-relevant knowledge and apply it in surprisingly smart ways at a continuously dropping error rate, this basis for a safe career path in software engineering starts to vanish. Even if the AI tools are not yet as good and reliable at knowing and applying arbitrary detail knowledge as a seasoned software developer, it is only a matter of time until they are.

Will software engineers become obsolete?

The pressing question thus is:

Are software engineers doomed? Will we all be replaced by modern AI tools?

Will the wet dreams of many managers of a world without pesky software engineers come true? Many managers have been hoping for a long time to replace software engineers with the next “silver bullet” technology. Or at least they hope that some technology will help them tackle their skills shortage: their inability to attract new software engineers while continuously losing their existing ones.

These dreams are many years old. We have seen them with 4GL. We have seen them with MDA. We have seen them with rapid prototyping tools. We have seen them with several other technologies. Currently, we can see them with low-code/no-code tools. And the most recent instantiation of this generation-old management dream is “AI”.

Personally, I do not think that questionable dream will come true. But I do think that modern AI tools will affect us differently than previous waves of so-called productivity tools did, and that they will profoundly disrupt the role of the software engineer over the course of the next years.

In the past, some detail knowledge became irrelevant due to technical advancements and better productivity tools. But it was always replaced by new detail knowledge relating to new technologies that could be used as a new differentiator. Past career paths were shaped by the limited amount of knowledge a single person could absorb. This limitation, combined with ever-growing hyper-specialization, defined the career models of software engineers. The more you knew about a currently fashionable technology, the better your career chances were.

But this time, it is different. The limitation of what a single person can absorb no longer matters: modern AI tools can learn a virtually unlimited amount of detail knowledge. As a consequence, there is no new detail knowledge of high value left to fuel your career path. As soon as you can learn something about a new technology – usually for free via the Internet – modern AI tools can do it, too. And they learn faster than you do, and they can learn more than you can.

Therefore: ChatGPT already knows.

But what is left to us then?

This will be the leading question of the remaining blog posts. In the next post we will build the required basis for answering this question. We will look at where we come from as an industry, where we currently are and what the job of a software engineer (should) comprise. We will also see that most software engineers only fulfill a small part of what the role actually comprises and how that leads to direct competition with modern AI solutions. Stay tuned …


  1. I heard this statement from Nathan Myhrvold in the documentary “The creative brain” (available, e.g., on Netflix). I like this statement a lot because IMO it perfectly describes the development towards hyper-specialization. ↩︎

  2. If you want to get a better understanding of the capabilities of current AI solutions, you may, e.g., want to watch the talk “AI, Make It So - A Dive into the Universe of Prompt Engineering” by Patrick Debois or attend the (free) online course “ChatGPT Prompt Engineering for Developers” by Isa Fulford and Andrew Ng. ↩︎

  3. The fact that software engineers work highly motivated at biting the hand that feeds them and their software engineering peers, by continuously improving the quality of AI tools that can write code, leaves room for many interesting ethical discussions. The same is true if we ponder what Naomi Klein wrote in a recent Guardian column: “[…] what we are witnessing is the wealthiest companies in history […] unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent”. While I think such discussions are important, I will not dive into the ethical implications of the current developments here. ↩︎