ChatGPT already knows - Part 4

The effects of hyper-specialization and nerd culture

Uwe Friedrichsen

12 minute read



In the previous post, we discussed what humans and AI solutions are (not) good at and learned that the strengths and weaknesses of humans and modern AI solutions basically complement each other. We have also seen that software engineers often do not leverage their human strengths. Instead, they often position themselves in ways that place them in direct competition with the strengths of AI solutions.

This usually does not happen because they knowingly choose such a position. Instead, several more or less hidden forces push them in that direction.

In this post, we will examine two important forces: The omnipresent hyper-specialization in IT pushing software engineers into areas of human weaknesses and nerd culture (involuntarily) reinforcing the push.

Hyper-specialization emphasizes human weaknesses

Having discussed the strengths and weaknesses of modern AI solutions and humans, we can move on to the last point of the train of thought I sketched at the end of the second post of this series:

  • What needs to be changed to emphasize the strengths of humans in software engineering instead of trying to compete with the strengths of AI solutions?

First, let us briefly recapitulate the strengths and weaknesses of AI solutions and humans as we discussed them in the previous post:

  • AI solutions are good at memorizing and recalling large amounts of information. They are also good at repeatedly creating software solutions for relatively well-defined, limited requirements that are backed by an existing body of knowledge. 1
  • AI solutions are not good at navigating highly complex and ambiguous environments with a high degree of uncertainty. They are also not good at taking the encompassing context of the task they are asked to solve into account.
  • Humans are good at successfully navigating highly complex and ambiguous environments with a high degree of uncertainty – even though most humans dislike such settings. Humans are also good at taking the context into account, including implicit assumptions and expectations of other people.
  • Humans are not good at memorizing lots of information over a long period of time. They are also not good at doing repetitive work over and over again without making errors.

Reading through this summary, it becomes quite straightforward to see that the strengths and weaknesses of humans and AI solutions are largely complementary. From this perspective, software engineers and modern AI solutions should complement each other nicely.

Why, then, are modern AI solutions considered such a huge threat to software engineers that some people predict up to 80% of all software engineering jobs will be replaced by AI solutions in the next years?

Of course, part of the answer is that many people – especially those who would benefit from such a development – tend to overestimate the impact of such developments. If I want to sell a solution, if I want to sell consulting, if I am an “analyst” and want to increase my popularity or the click-rates of my articles, if I hope to save money by replacing humans with machines, I deliberately overestimate the impact. Also, it is “best practice” in IT to massively overestimate the short-term impact and speed of penetration of new technological developments.

But even if we exclude these factors, there will still be a high impact on software engineers due to the direct competition with the capabilities of AI solutions.

From my perspective, one of the biggest contributors to this situation is the pervasive hyper-specialization in IT. The problem is that hyper-specialization and its effects emphasize human weaknesses. It rewards us for memorizing and repeatedly recalling knowledge, applying it over and over again to the same or very similar kinds of tasks. As we can only memorize a certain amount of knowledge without forgetting parts of it, it limits the area of expertise we can cover as an individual.

Most of the tasks are also routine tasks. Even if we are proud of being “knowledge workers”, most of our tasks are, if we are honest, the knowledge work equivalent of repeatedly fastening the same screws at an assembly line. We do the same tasks over and over again. A few details differ, but the task remains the same and we are rewarded for completing it as fast as possible.

Finally, most tasks are relatively simple and isolated. Complex tasks are broken down into increasingly smaller and simpler ones until the remaining software development tasks are basically trivial, with no complexity or ambiguity left over. And if a task is not like that, many software engineers complain that the requirement (or “user story” or “task”) is “bad”, i.e., they actively ask for simple and obvious task descriptions without any ambiguity.

While such a setting might be convenient for many people (remember: most humans dislike complexity, uncertainty and ambiguity and favor clear and simple settings with obvious rules to follow), it also leaves us in the area where we are going to compete with AI solutions – again, a competition we most likely will not win.

Nerd culture as a reinforcing factor

To sum up the previous section: Hyper-specialization has pushed us into the corner most of us currently find ourselves in: being mere feature implementation “machines”, repeatedly doing the same type of clearly defined tasks in a narrow area of expertise with only small deviations between tasks.

Such settings are bad for the self-esteem of the people affected:

  • We do not know the purpose of our work because we only receive small tasks to implement without knowing (and sadly often also without caring) how they fit into the bigger picture, the value creation process of the company.
  • We are expected to implement the tasks as quickly as possible. And when one task is done, the next task is waiting. And the next. We cannot act autonomously; we are tightly controlled externally. All that counts is completing one task after the other as quickly, i.e., as cost-efficiently as possible. We are treated like a cog in the software development machine.
  • Because quick means cost-efficient, we are usually meant to deliver just-as-good-as-it-absolutely-needs-to-be quality. We are not rewarded for mastery, for the quality of our work, for understandability, maintainability, simplicity or anything else – only for the speed of completion.

If we apply Daniel Pink’s model of what motivates people from his famous book “Drive: The Surprising Truth About What Motivates Us”, we can see that the widespread hyper-specialized, division-of-labor-based software development practices deprive software engineers of all three motivational factors – mastery, autonomy and purpose: Most of us do not know why we are doing our work, we lack control over our work, and mastery is not appreciated.

Such settings also deprive people of their self-esteem. They only get rewarded for things that do not motivate them. To avoid mental harm, people tend to create their own substitute motivations and rewards. Typically, these substitute motivations and rewards are shared only between the people affected by the unhealthy conditions and form some kind of subculture. In IT, this subculture is nerd culture.

Admittedly, nerd culture cannot be traced back to a single root cause. It is a combination of multiple effects and influences. But hyper-specialization and its effects massively fuel nerd culture. As software engineers are neither motivated nor rewarded by the encompassing system for doing a good job, they need to create their own mutual reward system to stay mentally sane.

This leads to a definition of being a good software engineer which is detached from the needs and wants of the encompassing system. Instead, it is mostly defined by the needs and wants of the software engineers, influenced by some selected traits of the encompassing system.

As a result, software engineers oftentimes admire other software engineers who have extremely deep knowledge in a very narrow area of expertise. As the engineers are usually detached from the business level value creation process, they focus on tools and technology as their areas of expertise – the more detached the area of expertise is from the value creation process the more it typically is admired. On the other hand, broad knowledge is not valued a lot – especially if it is connected to the value creation process.

Keyboard “wizards” who know all the shortcuts of their IDE by heart are admired. Mouse users are pitifully laughed at. Very strict “rules” are set up about how a piece of code needs to be written to be “good”, or how the code needs to be tested. And so on. Lots of dos and don’ts, usually detached from the value creation process, mostly revolving around deep knowledge in arbitrarily narrow areas of expertise and focused solely on the needs and demands of software engineers. And if you want to “play with the cool people”, you had better do things as the opinion leaders dictate.

In other words: A typical subculture with its own norms, rules and belonging rituals.

This is perfectly understandable, given the circumstances most software engineers need to do their work in. And admittedly, not all of the norms and rituals of the nerd subculture are bad in terms of business value. Software engineers typically are smart people after all and they see quite well what is missing in their domain to improve the overall quality of the software systems they write and maintain.

Yet, due to the detachment from the business value creation process and the missing (or non-functioning) dialogue between the parties involved, the norms and rituals of nerd culture tend to neglect the needs of the overall value creation process, just as the owners of the overall value creation process tend to neglect the needs of the software engineers.

Personally, while not necessarily sharing the ideals of the widespread nerd culture, I also do not have a problem with it. For me, it is only natural that a group of people deprived of the well-deserved recognition for their (often excellent) work create their own recognition and reward system even if the subculture sometimes loses track of what actually creates value for them.

For me, the more problematic aspect of nerd culture is that it reinforces the effects of hyper-specialization. To create a working mutual reward system within the boundaries of their encompassing context, software engineers needed to make the effects of hyper-specialization part of their reward system. As a consequence, deep detail knowledge in (often arbitrary) small areas of expertise is rewarded. Additionally, many of the rewards are tied to implementing relatively small tasks: Writing “concise” code (often confusing understandability and maintainability of code with the number of characters typed), unit test coverage, using the “right” idioms, and so on.

The problem with this reward system is that it cements hyper-specialization and its effects. To belong to the nerd culture, you need to optimize your skills and knowledge in exactly the areas where modern AI solutions are good and will probably soon beat humans. Put the other way round, this means that contemporary nerd culture may be an impeding factor when it comes to shifting our focus as software engineers to maintain (or even improve) our value.

How to leverage our strengths

Coming back to modern AI solutions and the challenges they pose to software engineers: It should have become clear by now that sticking to our widespread habits and routines will most likely not lead to a happy ending – at least not for us. Competing with modern AI solutions in their areas of strength will most likely be futile. They will soon outpace us in those areas which require remembering arbitrary amounts of detail knowledge and creating software solutions in well-defined, limited and mostly isolated contexts.

Hence, we need to change our habits and routines and focus on our human strengths: navigating successfully under uncertainty and ambiguity in highly complex environments, oftentimes with only incomplete information. This has always been part of the work of good software engineers. Unfortunately, due to the effects of hyper-specialization, which takes the ideas of division of labor too far, many software engineers forgot about this part of their work and made themselves comfortable in the small, clear and obvious workplace the division-of-labor approach assigned to them.

What does it take to position ourselves back in software engineering’s areas of human strength?

Here are some ideas:

  • Understand non-IT domains
  • Embrace complexity, uncertainty and ambiguity
  • Improve communication and collaboration skills
  • Train empathy
  • Broaden the area of effectiveness
  • Become a “full-range engineer” instead of a “full-stack engineer”
  • Resist the rat race
  • Find an own position regarding nerd culture
  • Be (a bit) rebellious

While this list most certainly is not exhaustive, it already contains a lot of ideas that point towards the areas of human strength and AI weakness, i.e., where we want to go. Therefore, we will dive a bit deeper into the different ideas in the following two posts.

Summary

In this post, we examined how the omnipresent hyper-specialization in IT pushes software engineers into areas of human weakness and how nerd culture (involuntarily) reinforces the push. We also collected a little list of ideas for how we can leave the corner we are often stuck in and position ourselves differently as software engineers – aligned with human strengths, preserving our value as software engineers in the face of modern AI solutions.

In the next post, we will discuss the first 6 items from the list – what it takes to become a “full-range engineer”. Stay tuned …


  1. Even if the capabilities of modern AI solutions may appear “human-level” or sometimes even “super-human-level”, we need to remind ourselves that the actual reasoning capabilities are quite limited. What makes the capabilities appear so impressive is the mere fact that the reasoning is backed by an enormous amount of data. Basically, any human with access to the same amount of data could easily come up with the same or a higher quality of reasoning. The fact that most tasks in the contemporary world of work only require such shallow reasoning capabilities is a completely different story which is outside the scope of this blog series. ↩︎