ChatGPT already knows - Part 2

Past and present of software engineering

Uwe Friedrichsen

14 minute read


In the previous post, we discussed that detail knowledge, one of the major differentiators in software engineering careers, ceases to be a differentiator due to the growing capabilities of modern AI solutions. Whatever relevant detail knowledge a software engineer can have, these tools can also have. We stopped with the question of what is left for software engineers in such a changing landscape and how to preserve one’s value.

While I pondered this question, I realized that answering it is not as easy as I initially thought. Modern AI solutions have inadvertently brought some of the deficiencies and madness of our industry back to the surface, unadorned. Responding to the novel challenges they pose thus needs more than just a few knee-jerk reactions. It requires understanding the situation in a more holistic way to figure out how to respond to it best.

Thus, I decided to take the required steps back first, hoping that the more complete picture will help you better understand the recommendations that will follow in one of the subsequent posts.

In this post, we will first look at where we as an industry came from and then at what our job is from a very general perspective. Let us start with our heritage and what it means for our current situation.

Where we come from

As written before: To understand how we can keep or even improve our value as software engineers in the future, it is useful to first understand where we come from and where we are. I have written about this in quite a few prior posts. Still, most of you will not read my blog regularly. And even if you do (thx a lot, btw!), I cannot assume you remember all this information, scattered over quite a number of posts. Hence, here is a little summary:

  • Software development started in a pre-industrial mode. Software developers were sort of crafts(wo)men. As hardware performance was very limited, IT only supported very limited parts of the business, usually a business function or smaller. IT was a pure business support function.
  • In the 1960s, the demand for software development rose massively. To satisfy the demand, software development was industrialized: Applying industrial production ideas, software development was split up into several steps that were connected via a virtual assembly line, the software development process. The division-of-labor approach turned the crafts(wo)men into specialists who only handled a limited part of the IT value chain: business analysts, requirements engineers, architects, designers, developers, testers, and so on. This enabled a cost-efficient scaling of the software production process, focused on economies of scale.
  • Hardware became more powerful over time. Networking started to connect multiple computers. Personal computers entered the business departments. As a consequence, IT started to support whole business processes. Software development went from being a complicated task to being a complex task (see, e.g., the Cynefin framework for a distinction between “complicated” and “complex”).
  • The original division-of-labor-based software engineering approaches did not work so well anymore. However, as IT was still a business support function, the industrial software development approach was maintained, focused on minimizing costs per unit of software produced. To compensate for the problems caused by complexity, more of the “proven” was added, i.e., software development processes became bigger, more controls were added, big quality initiatives were started to reduce variance (“eliminate errors”), and so on.
  • In the late 1990s, agile software development approaches were introduced as a response to the growing inadequacy of the by then bloated traditional software development approaches, explicitly accepting and embracing complexity as well as markets that had turned post-industrial in most domains.
  • However, inertia was so strong in most companies that “culture ate agility for breakfast”. Almost all agile transitions ended up doing the same as before (trying to minimize costs per unit of production, assuming almost perfect predictability), just disguised with a new set of “Agile” terminology. This is still the status quo for most companies.
  • With the rise of the Internet and (a bit later) smartphones, user interactions and subsequently whole business models were digitized. The ongoing digital transformation made business and IT more and more inseparable. IT was no longer a support function. Today, we can state: IT is business. Business is IT. It also means that the full unpredictability of post-industrial markets immediately hits IT. The weakest link of business and IT determines how well a company can respond to dynamically changing market needs and demands.
  • Instead of solving the sometimes decades-old problems of software engineering, new tools and technologies were usually introduced, in the hope that they would solve the problems – which they did not. IT landscapes became more and more complex. Today, most IT landscapes are on the verge of incomprehensibility and unmaintainability.
  • To address the challenges of today, i.e., the complexity of business and IT landscapes and the uncertainty regarding future needs, the “proven” approaches, targeted at scaling software production in non-complex environments, were continuously “refined”, leading to hyper-specialization: Software engineers are expected to specialize more and more in ever-shrinking areas of expertise to address the growing complexity. The number of handovers between all these increasingly minuscule areas of expertise continuously increases while the overarching understanding of business needs and IT interaction dwindles.
  • Unpredictability and uncertainty are still ignored in most companies, as average lead times of multiple months or even years from a business idea to production prove. Instead, more and more “features” are expected to be implemented in ever shorter time to tackle the unpredictability of the market. Instead of enabling shorter and tighter feedback loops between business and customers to address post-industrial uncertainty, an ever-increasing amount of features is thrown haphazardly at the customers, hoping to accidentally meet some of their needs and demands.
  • In the middle of the IT value chain, some desperate developer teams are expected to “sprint” all the time, not to enable faster feedback loops between business and customers, but to implement more software per unit of time, driving industrial production methods to a twisted extreme (see also the previous list item). Efficiency is still king, effectiveness is ignored.
  • At the same time, software engineers are expected to pick up new tools and technologies at an ever-growing pace, most of them advertised as the next silver bullet to solve the issues of industrial production methods pointlessly applied to post-industrial markets. A huge, bloated industry meanwhile lives extremely well from this contortion: Product vendors, training and consulting companies, IT media companies, conference organizers and more, all painstakingly taking care that the situation does not change.

This was a short summary of where we came from and what can (still) be observed in most companies, reinforced by an industry that earns very well from the status quo.

Note that some companies act differently, especially many of the Internet unicorns (today usually called “Big Tech”). They optimized for economies of speed, i.e., they minimized the lead times of their IT value chain. Often, their average lead times are a few days or less – from business idea to production, immediately gathering feedback from the users to learn and start the next cycle. To achieve such short lead times, they approached IT very differently, optimizing for flow and decentralized decision authority. As a side effect, their IT departments became very effective – and also surprisingly efficient.

Nevertheless, the majority of companies (and the encompassing industry living very well from it) are still focused on economies of scale in their IT departments, leveraging industrial production methods, leading to hyper-specialization, cargo-cult agility that ignores market unpredictability, and uncontrolled growth of software complexity.

This should be sufficient for the moment regarding where we came from and where we are. As written before: We need to keep that in mind when pondering how to adapt best to the new challenges posed by modern AI solutions.

What needs to be done

It is also important to remind ourselves what the job of a software engineer comprises – or should comprise. Quite often we lose track of it in all the details of our daily work. However, we need to take this somewhat more holistic stance when trying to figure out how we can keep or even improve our value as software engineers in the future.

Hence, the question is: What do we typically need to do as software engineers? What kinds of tasks do we need to support?

This is a very high-level description of what we typically do:

As software engineers, we need to work with software. We either need to implement some new kind of functionality or we need to modify some existing functionality. Sometimes the software does not work as expected and we need to fix the issue (the “bug”).

This never happens in isolation but within the context of a complex web of business functionalities and IT systems. Everything is connected and everything affects everything.

Typically, a lot of – often conflicting – visible and hidden needs and expectations from several stakeholder parties also need to be detected, managed and taken into account when designing and implementing the solution.

A software engineer typically

  • takes a new business need (called “requirement”, “user story” or alike),
  • tries to clarify the ambiguities and hidden expectations,
  • needs to understand how the new or changed functionality interacts with all the other highly complex existing and planned encompassing business functionality,
  • needs to figure out which other constraints affect the solution,
  • needs to understand how the usually complex and often contradicting additional explicit or implicit needs and expectations of different stakeholder parties affect the required solution,
  • then turns the given business problem into a solution,
  • embeds it into the existing highly complex system landscape,
  • makes sure all quality requirements are met,
  • and finally releases the solution to production.

I deliberately left out the challenges due to company politics and inertia (often also called “culture”) in this task description. Politics and inertia add another layer of complexity and tend to make the job of a software engineer even more challenging than it already is.

In short:

Working as a software engineer basically means turning business needs and expectations into software solutions within a highly complex and often unpredictable environment on both problem and solution side.

Based on this (abstract) work description of a software engineer, two main observations can be made:

  • The word “complex” occurred quite often in this description of the work of a software engineer. This is not by accident. Today’s software engineering takes place in a highly complex environment, and everyone familiar with complexity theory knows that a complex task can only be solved by a complex solution system. The predominant hyper-specialized, division-of-labor-based approach does not provide a complex solution system but a complicated one at best (again: see, e.g., the Cynefin framework for a distinction between “complicated” and “complex”). Hence, hyper-specialization and its effects on the predominant career paths of software engineers are at odds with effective software engineering work.
  • Writing code is only a very small part (< 10%) of the overall work that needs to be done. A much bigger part is understanding and managing the (often only partially rational) needs and wants of the people involved, embedding them into the existing system landscape without violating existing constraints while making sure the overall system landscape stays modifiable and extensible over a long period of time. Again, hyper-specialization and its effects on the predominant career paths of software engineers are at odds with effective software engineering work.

The two observations are confirmed by looking at how the software engineer role is implemented in most companies: a highly efficient feature coding machine, continuously taking on clearly defined tasks (and complaining if they are not clearly defined), following clear rules, being proficient inside a small area of expertise.

This is not what software engineering is about in general. This is just a tiny subset of the overall task. Still, this is what most companies expect and reward. Remember: The more you are an expert in a narrow area of expertise, the higher the chances you can land a well-paid job in an enterprise IT department. The more complicated details you know that the person next to you does not know, the better your standing in the company.

Summing up: While software engineering in general is a highly complex and challenging task, most software engineers do not cover the whole spectrum of software engineering, but just a tiny subset, related to translating relatively clearly defined features into code, often ignoring the consequences of their work over time and/or for other systems or system parts (because they are rewarded for completing features as quickly as possible, not for pondering the consequences of their work).

With that, let us come back to modern AI solutions.

AI solutions will beat us soon in writing code

As I currently see it, we will soon no longer be able to beat AI solutions when it comes to widely needed coding tasks. By “widely needed” I mean the following: As soon as something develops a certain demand, it becomes popular. If something becomes popular, people start talking and writing about it. Very often, we use these freely available knowledge sources on the Internet ourselves to learn it. And as soon as enough knowledge sources exist on the Internet for us to learn from, modern AI tools can learn from them, too.

As a consequence, most typical software development tasks in most companies can also be implemented by AI solutions – if not now, then soon. Because let us be frank: Most implementation tasks are not rocket science. They basically require combining several usually well-known building blocks in some specific way that fits the task description – as the small sketch below illustrates. Modern AI solutions can also learn this – and some of them are already surprisingly good at it.
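
To make this point a bit more tangible, here is a minimal, hypothetical sketch of what such a “combine well-known building blocks” task often looks like in practice: input validation, persistence and a response, wired together in a small HTTP endpoint. The endpoint name, payload fields and in-memory store are purely illustrative assumptions, not taken from any specific project.

```python
# Hypothetical sketch of a routine implementation task: validate input,
# persist it, return a response. The /orders endpoint and the in-memory
# "database" are illustrative assumptions only.
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}  # stand-in for a real database


@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json(silent=True) or {}
    item = payload.get("item")
    quantity = payload.get("quantity")

    # Validation: one well-known building block
    if not item or not isinstance(quantity, int) or quantity <= 0:
        return jsonify(error="item and a positive integer quantity are required"), 400

    # Persistence: another well-known building block
    order_id = len(orders) + 1
    orders[order_id] = {"item": item, "quantity": quantity}

    # Response mapping: yet another one
    return jsonify(id=order_id, **orders[order_id]), 201
```

Nothing in this snippet requires deep insight; it is exactly the kind of recombination of widely documented patterns that AI tools are trained on.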

We could argue that the AI tools do not always create good or correct solutions. Well, based on the code I have seen over the course of my career, this is true for humans, too. And by learning from feedback, the AI tools will quickly learn how to create better solutions, avoiding their past mistakes. The same is true regarding current security concerns, etc.

We could try to withhold the knowledge from the AI tools. But that would mean we would also deprive ourselves of freely accessible knowledge. Even if I personally think this widespread “expecting everything for free” mentality is a very questionable development that we observe not only in IT, I am convinced that locking all knowledge behind (payment) walls would be a Pyrrhic victory at best: We would be cut off from the knowledge as much as the AI tools would be. 1

Hence, my personal take is that in the not-so-long run, AI solutions will beat us at most coding tasks. It may be 3 years, it may be 8 years, but it will happen.

Or to phrase it a bit differently:

In a few years, writing code will probably be like building a car on your own: It will be fun, but basically nobody will pay you for it anymore – especially if you are more focused on the art of building cars and on making the process as enjoyable as possible for yourself than on the needs and demands of your customers, neglecting them whenever they do not suit your personal preferences. Because, to be plain: This is what many self-acclaimed “hardcore software engineers” tend to do.

Therefore, instead of the typical knee-jerk responses, i.e., ignoring, ridiculing or fighting the new challenge, I would recommend focusing on a different train of thought and asking:

  • What are AI solutions really good at?
  • What are AI solutions not good at?
  • What are humans really good at?
  • What are humans not so good at?
  • What needs to be changed to emphasize the strengths of humans in software engineering instead of trying to compete with the strengths of AI solutions?

Summary

In this post, we discussed where we came from as an industry, where we currently are, and what the job of a software engineer comprises (or should comprise). We also saw that most software engineers only cover a very small part of what the role actually comprises, leading to direct competition with modern AI solutions.

In the next post, we will start to answer the questions listed above, discussing what humans and AI solutions are (not) good at, to get a better understanding of how we can position ourselves in a more valuable way without competing with AI solutions. Stay tuned …


  1. Currently, a discussion is gaining momentum about whether people should have to give consent before work they have published on the Internet may be used to train AI solutions. While I think this is a relevant discussion, I also think it will at best temporarily slow down the progress of AI solutions, but it will not prevent it. ↩︎