Towards greener requirements
A huge untapped lever for sustainability
I have not written much about sustainability on this blog yet. However, I think it will become one of the mega-drivers in IT in the upcoming years. As a consequence, I think it is sensible to start pondering how to include sustainability in our daily work.
Like resilience, sustainability has many facets and dimensions, and I will lay them out in a future foundational blog post. All of these dimensions affect IT and, if done right, result in better working conditions, better business agility, higher societal value (i.e., customer value based on societal needs) and a reduced ecological footprint.
In this post, I will solely focus on the ecological dimension of sustainability.
Reducing green IT to infrastructure is not enough
There are people who say we should leave green IT to the hardware vendors and data center operators. And admittedly, this is an important aspect of reducing the ecological footprint of IT:
- Creating more energy-efficient hardware that provides more compute power per joule consumed
- Building more energy-efficient data centers with a better PUE value (Power Usage Effectiveness, the ratio between the overall power consumption of the data center, including cooling, etc., and the power consumption of the IT equipment alone; see the small calculation after this list)
- Improving the utilization of the hardware used, not only to reduce the amount of resources and energy needed to build the hardware, but also because the energy efficiency of IT hardware tends to improve with higher utilization
- …
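To make the PUE bullet point above a bit more tangible, here is a tiny calculation. The numbers are purely illustrative assumptions of mine:

```python
# Hypothetical numbers for illustration only.
total_facility_power_kw = 1_500  # IT equipment + cooling, lighting, power distribution, ...
it_equipment_power_kw = 1_200    # power consumed by the IT equipment alone

# PUE = overall data center power consumption / IT equipment power consumption
pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE: {pue:.2f}")  # 1.25 -> 25% overhead on top of the actual IT load
```

The closer the PUE gets to 1.0, the less energy the data center spends on anything but the actual IT equipment.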
We should do all of these things. They are an important pillar of making IT greener. But they are not enough.
As we know from other topics like performance or security: if you want to become really good at something, you need to optimize at all levels, not just one. So, it is not enough to rely solely on the hardware manufacturers and data center operators for a good ecological footprint. You also have to look at everything on top of that, from rightsizing, adequate architectures and sensible implementation up to making good decisions about what to implement in the first place.
Personally, I think most people who advocate leaving green IT to hardware vendors and data center operators are trying to evade their ecological responsibility: by shifting the responsibility to a different party, they can continue acting as they always did without taking any responsibility for what is happening.
Sadly, we find this attitude in many places inside and outside of IT, and I, at least, think we should not let those people get away with such irresponsible behavior.
Jevons paradox in IT
What makes things worse is that Jevons paradox also strikes in IT.
Wikipedia defines Jevons paradox as follows:
In economics, the Jevons paradox occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use increases its demand, negating reductions in resource use.
This definition feels a bit unwieldy. Thus, let me explain it in slightly different words:
- We have a resource (in our context: compute power, storage and network).
- At a given point in time, a certain amount of compute power – consisting of CPUs, memory modules, hard disks, network adapters, etc. – costs a certain amount of money. For the sake of simplicity, let us say it costs us 100 € per month to satisfy our compute power demands.
- Now the resource providers (in our context: the hardware or data center providers) improve the efficiency of the resources. As a consequence, we get more compute power for the same price – here 100 € per month.
- This in turn means we need fewer resources to satisfy our processing and data storage demands. E.g., we now only require resources costing us 80 € per month.
If we applied this idea globally, it would mean that with increasing efficiency of compute, storage and network resources, we need less and less of them (assuming production costs and raw resource consumption decrease in line with the resource sales price, i.e., also go down by 20%).
But in reality, we use more and more compute resources every day, even though they have become orders of magnitude more efficient over time.
This is Jevons paradox: the increased efficiency of a resource results in a disproportionate increase in demand.
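To illustrate how strong this effect can be, here is a toy calculation. All numbers are invented for the sake of the example:

```python
# Toy model of Jevons paradox with invented numbers:
# efficiency improves 100x, but demand grows 1000x over the same period.
efficiency_gain = 100.0  # compute power delivered per unit of resource, relative to before
demand_growth = 1000.0   # compute power demanded, relative to before

# Resources consumed = demand / efficiency (normalized to 1.0 at the start).
resources_before = 1.0
resources_after = demand_growth / efficiency_gain

print(f"Resource consumption grew by a factor of {resources_after / resources_before:.0f}")
# -> 10: despite the 100x efficiency gain, total resource use still grew tenfold.
```

As long as demand grows faster than efficiency, total resource consumption keeps rising, no matter how impressive the efficiency gains are.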
IT resources have become orders of magnitude more efficient over the last decades:
- An average mobile phone of today has more than 10,000 times the compute power of a mainframe 50 years ago.
- In the early 1990s, most companies worked with 4 Mbit/s Token Ring networks (if they had an IBM mainframe connected to their network) or 10 Mbit/s Ethernet. Today, 100 Gbit/s networks are the norm in most data centers and 400 Gbit/s is on the rise.
- Storage costs dropped from around $450,000/GB in 1980 to around $0.03/GB in 2014 (and down to around $0.014/GB in 2022).
- Etc.
Still, the cloud providers build additional data centers all the time, confronted with ever-increasing resource demands. At the same time, we as users are confronted with the same slow, lagging and stuttering applications we already endured 30 years ago.
At least, it feels that way. Of course, all those applications have become a lot shinier. They have fancy animations everywhere that do not add anything to usability but look nice – and consume a lot of resources. Most applications implement tons of features we rarely or never need. And so on. [1]
So, Jevons paradox is strong in IT – very strong. Every increase in hardware and infrastructure efficiency is more than compensated for by a disproportionately increasing demand.
This massively affects the ecological sustainability of IT: If we leave green IT to the hardware and infrastructure layer, all advances made at this level will be negated by increased demand – growing ecological footprint guaranteed.
Wasteful requirements
So, what can we do at the software development level to counteract this trend?
There are a lot of things we can do, and I will discuss several of them over the course of time. Based on my observations and experience, many factors contribute to creating wasteful applications. I described some of them in my “Simplify!” blog series. [2]
In this post, I will focus on requirements.
An old saying (not only) in the context of IT system development goes: Garbage in, garbage out.
In other words: if you pose pointless requirements, your application will not do anything useful based on them. Hence, the quality of the requirements significantly influences the quality of the resulting system. (Note that I do not refer to how well the requirements are captured, but to their inner quality, the quality of their contents.)
The same is true regarding ecological sustainability. No matter how green your hardware and infrastructure is, how well-designed your architecture is, how carefully green-crafted your code is: If your requirements demand ecological waste, your application will be ecologically wasteful.
As I have written before, a lot of stuff has been implemented over time that does not make much sense from an ecological perspective: lots of bells and whistles that look fancy but do not add any functionality to the application. Features that hardly anyone uses. And so on.
Doing it better
But how did those requirements slip into those applications? I mean, nobody intentionally creates wasteful applications (at least I hope so).
I think, a big contributing factor is missing feedback. Very often, the originators of requirements have no idea how wasteful their demands are. As a consequence, they require features without taking their ecological impact into account.
How can we do better?
Here is a little idea how we might improve the situation. To introduce it, let me first discuss a shortcoming I have observed in many agile projects and how we could tackle it. Afterwards, the idea of how to tackle ecologically wasteful requirements falls into place naturally.
Excursus: Ordering agile requirements
In many agile projects, the only metric associated with requirements is their estimated implementation effort. Usually, requirements in agile projects take the form of “user stories” and their effort is estimated in one of many ways: story points, t-shirt sizes, etc. [3]
Now, the question is in which order to implement those user stories. Scrum, the predominant agile method, expects an ordered backlog, i.e., the order of the stories in the backlog determines their implementation order.
This makes sense if the product owner (PO) is actually a product owner, i.e., a person who is deep into the business and can judge the expected business value of a story, its importance, its urgency, associated risks, business-level dependencies, and so on. Then, the PO can create an initial order putting the most valuable stories first.
After the effort estimation, the PO can add the implementation effort to her considerations. Based on that relation between business value and implementation effort, she might decide to change the order, or to revisit the story to reduce the expected implementation effort, or to drop the story altogether. So far, so good.
Unfortunately, in many projects the POs are not actual POs, but a sort of lightweight requirements engineer: persons who gather the requirements from the actual POs and control the implementation of those requirements. They are not “owners”. They are “proxies” at best. Often, they are mere order-takers who do what they are told.
How do you order user stories in a sensible way in such a setting? The persons who know about the business value of the stories are usually not available. Actually, that is usually the very reason PO proxies are installed: to shield the real POs from the IT projects and all the questions and discussions with software engineers who would steal their valuable time. [4]
Thus, the PO proxy and the team try their best to create some more or less useful order. As the only metric they have is the implementation effort (which also incorporates the technical risk), they typically order the stories along the effort and technical risk.
Additionally, they do not question any of the requirements. They are done when the last requirement is implemented. Due to the lack of an actual PO, they cannot adjust or drop any requirement, even if it takes a lot of effort to implement while providing little business value.
As a consequence, the implementation order of the stories can be (and often is) of little value for the company: Relatively irrelevant requirements are implemented first because they are either easy to implement (“deliver ‘value’ early”) or they contain a lot of technical risk (“investigate high-risk requirements early”).
From a business perspective, the order appears arbitrary and pointless. And only after the fact do the actual product owners realize that some of their low-value requirements took a lot of effort to implement – effort they would have redirected had they known how much implementing those requirements would take.
In the end, of course, a feedback loop between the actual product owner (originator of the requirements) and the development team (implementer of the requirements) is missing, and it would be best to get rid of the proxy PO and connect the actual PO with the team.
But assuming that this will not happen in most companies: How can we help the proxy POs and the teams to come up with a better user story order?
My suggestion is to add a second metric to the story: its “business value”. The business value describes the relative importance of a user story from the actual PO’s point of view. It condenses into a single value the expected business value of a user story, its importance, its urgency, associated risks, business-level dependencies, and all the other things the actual POs have in their minds.
Again, these values can be “value points” (like story points), t-shirt sizes, or whatever value system works best in the given context.
With this additional information, the proxy PO and the team are able to make much better decisions. Now they can judge the relative importance of a story based on business and IT aspects. They can detect if a low-value story requires a high implementation effort and reach out to the actual PO to clarify if the requirement should be modified or dropped. And so on.
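To make this a bit more tangible, here is a minimal sketch of how the two metrics could be combined. The story names, the numbers and the value-per-effort ordering heuristic are illustrative assumptions of mine, not part of any prescribed method:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    business_value: int  # relative "value points", provided by the actual PO
    effort: int          # relative story points, estimated by the team

backlog = [
    UserStory("Fancy splash animation", business_value=1, effort=8),
    UserStory("One-click reorder", business_value=8, effort=3),
    UserStory("Export as PDF", business_value=3, effort=5),
]

# One possible ordering heuristic: business value per unit of effort, highest first.
backlog.sort(key=lambda s: s.business_value / s.effort, reverse=True)

for story in backlog:
    ratio = story.business_value / story.effort
    # Arbitrary threshold: low-value, high-effort stories are candidates
    # for a clarifying talk with the actual PO (modify or drop?).
    flag = "  <- discuss with the actual PO" if ratio < 0.5 else ""
    print(f"{story.title}: value/effort = {ratio:.2f}{flag}")
```

In this sketch, the splash animation immediately sticks out as a candidate for modification or removal – exactly the kind of discussion the second metric is meant to trigger.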
While this additional value most definitely is not a replacement for a direct feedback loop, it would still improve the situation a lot.
Adding an ecological footprint
Let us end the excursus here and come back to the original question: How can we avoid (or at least reduce the probability of) wasteful requirements?
With the two values of the excursus in place, the expected business value and the expected implementation effort, the idea becomes straightforward: we need a third metric that provides an indicator of the expected ecological footprint of the user story at runtime.
So, from the excursus we already have two metrics: one estimating the relative expected business value and one estimating the relative development footprint, i.e., the expected implementation effort.
The third metric I propose is an operations footprint, which describes how many resources the given user story is expected to consume at runtime. This metric can then also be used as an indicator of the ecological footprint of the story. [5]
This metric could, e.g., be estimated by a group of architects and operations experts. [6] Again, “runtime points” (like story points), t-shirt sizes or whatever other metric works best in your context can be used.
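Sketched in code, continuing the hypothetical backlog model from the excursus (the field names, the threshold and the footprint-to-value ratio are, again, arbitrary illustrations):

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    business_value: int  # relative value points (actual PO)
    effort: int          # relative story points (development team)
    ops_footprint: int   # relative "runtime points" (architecture/operations expertise)

def discussion_candidates(backlog: list[UserStory], threshold: float = 2.0) -> list[UserStory]:
    """Surface stories whose expected runtime footprint is out of
    proportion to their expected business value."""
    return [s for s in backlog if s.ops_footprint / s.business_value >= threshold]

backlog = [
    UserStory("Real-time dashboard refresh", business_value=2, effort=3, ops_footprint=13),
    UserStory("One-click reorder", business_value=8, effort=3, ops_footprint=2),
]

for story in discussion_candidates(backlog):
    print(f"Talk to the actual PO about: {story.title}")
# -> flags "Real-time dashboard refresh": little value, lots of resources at runtime.
```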
This additional estimated metric enables valuable discussions about the runtime impact of a story. It enables questions such as:
- “This low-value requirement consumes a lot of resources at runtime. Should we modify it to get along with fewer resources or drop it altogether?”
- “This story has very low implementation efforts, but very high runtime costs. Is it possible to reduce the runtime costs by putting some more effort into a different implementation of the story?”
- “Do we actually need 99.9% availability for this requirement (which typically implies multiple replicas of the application running at the same time)? Or would it rather make sense to split the application into two or more parts at runtime, where the parts with reduced availability requirements need fewer resources?” (See the back-of-the-envelope calculation after this list.)
- “What if we relax the response time requirement from 200 ms for 99% of all requests to 400 ms? How big would the business impact be compared to the resources saved at runtime?”
- …
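Regarding the availability question above, here is the back-of-the-envelope math behind it, assuming independent replicas with 99% availability each (an illustrative simplification; real availability calculations are more involved):

```python
# Combined availability of N independent replicas, each 99% available.
single_availability = 0.99

for replicas in (1, 2, 3):
    combined = 1 - (1 - single_availability) ** replicas
    print(f"{replicas} replica(s): {combined:.4%} available")
# 1 replica(s): 99.0000% available
# 2 replica(s): 99.9900% available
# 3 replica(s): 99.9999% available
```

Each additional “nine” of availability costs roughly one more always-on replica – resources that a relaxed availability requirement would not need.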
It would also influence the implementation order because wasteful requirements now become visible and, like implementation risks, it can make sense to explore solution options early to figure out if there are alternative approaches with a lower operations footprint.
Such an additional metric would not result in a perfect world. There will still be people who will insist on their requirements, no matter how wasteful they are. There will still be people who do not care how wasteful a requirement will be at runtime as long as the implementation efforts are as low as possible and the ambitious (or sometimes unrealistic) release deadline is met.
But it will become harder to ignore the consequences of shortsighted or egoistic actions at runtime, to ignore the ecological impact of decisions made. And it will help people who really want to create greener software (which I think is the majority of people) to make better decisions and spark the required discussions.
Summing up
I am convinced that sustainability will become a huge topic in IT, much bigger than it currently (already) is. Sustainability has more dimensions than just ecological sustainability, but the latter is definitely an important topic.
We cannot leave ecological sustainability solely to the hardware vendors and the data center operators. We need to address it at all levels to achieve a good result and evade Jevons paradox.
There are several aspects that influence ecological sustainability at the software development level. An important one is the requirements: a software solution cannot magically become “green” if the requirements are wasteful.
An important driver of wasteful requirements is the fact that most originators of requirements do not know how wasteful their demands are at runtime. An idea to address this problem is to add an “operations footprint” metric to the requirements (or “user stories” in an agile context) that makes the estimated runtime impact of the requirement visible.
This is a very simple approach – which is a deliberate decision: I want something that is easy to understand, easy to implement and does not require a lot of effort, because such properties significantly lower the barrier to adopting such an idea. For the sake of easy adoption, I accept that the metric is not overly precise. [7]
And even though the approach is very simple, I think it would improve the situation a lot: all people involved see how wasteful the requirements are expected to be at runtime, which in turn enables better discussions and decisions.
Of course, there are also other drivers of wasteful applications. But those will be the content of future blog posts.
Still, I hope I gave you some ideas to ponder. Maybe you have a better idea. If you do, please share it with the community. We still need to become a lot greener …
[1] Of course, user experience has become better over the last decades. But it has not become a thousand times or more better, as the excessively increased resource consumption would suggest.
[2] Application simplification is not the same thing as creating ecologically sustainable applications. Thus, not all observations of my “Simplify!” blog series also apply to ecological sustainability. Still, both topics have a big overlap because simpler applications also tend to be less wasteful. Note that “simpler” does not mean “easier”. E.g., it might be easier to add something by pulling in comprehensive libraries that do a lot more than actually needed than to implement the required logic oneself. But it will not result in a simpler application because all the accidental complexity of the library is still part of the application. For the inevitable black-or-white thinker: this is not a call to avoid libraries altogether. Using libraries often is very useful and sensible. But I have also seen a lot of places where it was very wasteful, especially if the library used was very powerful. It is a trade-off you should always consider before adding a library.
[3] I will not dive into the “no estimates” discussion here. Estimations have a value. The problems start if you treat estimations as guarantees and not as – well – estimations, i.e., best guesses based on the knowledge available at the moment the estimation was made. Most of the “no estimates” discussion IMO addresses the problems that arise if estimations are treated as guarantees.
[4] It should be obvious that the practice of shielding the actual product owners from the software development teams is highly counterproductive when looking at the bigger picture. This practice basically guarantees that much more time and resources are wasted on erroneous development than the shielding of those persons saves the company. Still, we see this harmful practice in many companies.
[5] I know that this number does not precisely reflect the ecological footprint of a given requirement. Calculating the actual ecological footprint is much more complicated. Still, this estimation would be a useful approximation in most situations. And it is certainly a lot better than having no estimation at all about the runtime impact of a user story.
[6] To avoid pointless discussions: I do not advocate for an architect role or an operations expert role. I just want the people who have the required architecture and operations skills and knowledge to come together and make a sensible estimation. If someone from the development team has the required expertise: great! If not: find someone who does.
[7] In many situations, I think we can use an obvious imprecision of estimations to our advantage. An obvious imprecision makes it easier to remember that this value is just an estimation and not exact science, and that many things can happen along the way that might invalidate the current estimation. If estimations are too precise, they tend to be taken as exact predictions, as guarantees – with all the associated consequences we suffer from so often.