Last year I decided to add an improvement to my home office: a treadmill. I first thought about it during the total lockdown we endured in Spain in spring 2020. The reason for not trying it earlier was that I was skeptical about keeping my productivity high while walking on a treadmill. Finally I decided to try it out and asked The Three Wise Men for one. They were kind enough to bring it.
After a few weeks of using it, I would like to report on the experience, since several of my colleagues at MBition/Daimler have asked me about it. I hope that reading about my experience is useful to you.
Select the right treadmill
I started by doing some research on which treadmill to buy. The main characteristics that guided my decision were:
Walking vs running: I wanted a treadmill for walking, not for running. There are quite a few models that have a speed limit of 6 to 7 km/h and no bars or add-ons to hold on to while walking, which avoids any clash with my standing desk converter.
I am a big man so I need a big treadmill: the market offers treadmills with different lengths, surfaces and weight limits. I searched for one that could support a big man like me and that was as wide as possible. Falling off it was a concern.
Remote control: since you will be walking while working, you need a way to control the treadmill remotely, either with a remote control, an app or similar. I selected one with a remote control.
With wheels: I need to move the treadmill often, since I alternate between walking, working standing up and working seated. I wanted one with wheels so it would be easier to move. Moving it to clean the office was also a requirement. Treadmills are heavy.
Limited investment: my idea was to try the concept out first, so I did not want to invest a lot up front. If it works out, I will invest more in a replacement when needed.
Pay attention to the noise level: my treadmill is noisy, although not enough to be annoying or to be heard by others during video calls. Check the specifications; I did not, so I simply got a little lucky.
I bought this model, produced by 2WD, ASIN/model B08GLTX8LK, through Amazon for €299. I had no issues with the purchase.
I now use the treadmill between 2.5 and 3.5 hours a day. I started out using it between 5 and 6 hours, but I soon hurt one foot, which forced me to stop for some time. First lesson learnt: take it easy, especially if, like me, you are not in great shape.
At first I ended up so tired that it was hard for me to keep good levels of concentration during the last part of the working day. As I get more used to the treadmill, I feel more energetic in general, so that tiredness is gradually going away. My productivity is increasing over time, especially compared to the first few days. In the last few weeks I even have energy left to do some additional light exercise after work.
I started walking at 2 km/h and increased to 2.5 km/h after two weeks. A few weeks later I tried 3 km/h, but it was harder for me to concentrate; it is too fast for now. There is, by the way, a slight mismatch between the speed and the distance reported by the treadmill; I think the average speed I actually walk is a little below 2.5 km/h. I am currently walking around 7-8.5 km a day (10k steps is around 8 km). In any case, I try to reach the 6.5 km mark every day, and I have been hitting it regularly over the last few weeks.
Can I work normally while walking?
This is the question most colleagues ask me. The answer is yes… for most activities.
My experience is that I can work normally while walking, keeping good productivity levels, across most types of activities, especially most meetings.
In which activities have I detected a reduction of productivity?
I detected a productivity decrease in activities that require high levels of concentration or creativity: complex meetings with many participants that I facilitate, moments when I need to come up with new ideas, analysis of complex data, etc. The good news is that I detected this productivity reduction early. The longer I use the treadmill, though, the more types of tasks I can do without noticing any reduction. So in the coming weeks I plan to retry some of the activities I dropped early on.
Walking is helping me a lot to get through days with back-to-back meetings. I pay better attention and get less bored compared to standing. As I get more used to walking, my energy levels in the afternoons are increasing, as mentioned, which helps me get through the last couple of hours of long working days. It was not like that at first though, so plan accordingly.
The unexpected surprises
The treadmill's remote control uses IR, which interferes with an IR-based device I had in my office: an HDMI hub. I use two laptops and two monitors, and I used the HDMI hub to assign one of the two screens to one of the laptops; the default configuration is to have both monitors assigned to my work laptop. I did not find a way to prevent the interference, so I had to remove the HDMI hub. I am looking for alternatives, although the impact on my daily workflow is low. The problem was easy to detect because every time I turned the treadmill on, one of my screens turned off, even though both laptops detected the corresponding monitor. So the takeaway is: watch out for IR interference.
A trickier surprise, for which I have not found a solution yet, is the set of problems associated with PLC. My office is connected to the living room, where the fiber router is located, through PLC (powerline). This has been my default setup for years, in several houses. Something related to the treadmill makes the link between both PLC devices unstable, which results in connectivity issues. My backup for the wired link is a wifi repeater that includes a dedicated wifi network for my work devices (in other jobs I used to play with boards, which required a testing network). So sometimes I have to use wifi instead of the cable when the link between the PLC devices gets unstable. It took me a while to find the source of the issues. After testing the PLC device in every plug in my office, I found one that makes the connectivity issues tolerable. Still, I am searching for a solution here.
The takeaway is that the treadmill draws quite some power and may introduce noise into the power line that makes network connectivity over PLC unstable.
Other points to consider
Some days I start the working day walking, and others I walk after lunch or when I have back-to-back meetings. I tend to change the setup, putting the treadmill in place or removing it, no more than once a day. Usually there is one additional change when I push down my Varidesk to work seated. The idea is to keep configuration changes to a minimum.
The standing desk converter I have is big enough to let me hold on to it while walking when my hands are not on the keyboard. That has saved me from falling during meetings more than once. Be careful: falling is a real risk. Take precautions.
Get professional advice about walking with or without shoes. In my case, I walk without shoes, although this decision may change in the future; my knees are in bad shape due to injuries from my basketball days. Pay attention to the way you walk. When you get tired, your walking style changes and you may end up hurting yourself.
My treadmill has a limit of 100 minutes; when it reaches that threshold, it stops. I was annoyed at first, but it has turned out to be a great thing. I used to work in cycles of 110 or 120 minutes, but I sometimes lose track of time and this hard stop helps. While walking, my working cycles are now shorter, which helps my concentration, especially at the end of the day.
Obviously, participants in meetings notice that I am walking. Be ready for questions and jokes; having a treadmill is not mainstream yet. Ah, and do not fall during a meeting.
At first I overdid it, got hurt and tired fast, and my productivity in the afternoons dropped. On my second try I took it easier, and the experience is getting better over time. Six weeks after getting back on the treadmill I am still improving. I will soon be ready to increase my walking time and the types of activities I do while on it.
I think it will take me another few weeks before the benefits for my health and shape become evident, although I can see early signs. So in general I find the experience positive so far, and I expect it to get better in a few more weeks when, in addition to the walking on the treadmill, I have the energy to complement it with additional exercise on a regular basis. The same applies to my level of concentration during the most demanding tasks.
The connectivity issues I am experiencing are unfortunate although tolerable. I need to find a solution.
So yes, I totally recommend getting yourself a treadmill. Results will show up sooner or later, depending on your prior shape. If you are in better shape than I was when I got it, which is not hard, results should be noticeable in a couple of months, probably less.
When your team goes remote or when you are creating a new remote or distributed team, you need to reconsider the most basic ground rules. Most are a given when colocated. One of these ground rules to reconsider is people’s availability.
At the office, you expect people to be available at more or less similar times; even if your organization promotes flexi-time or core hours, that expectation is mostly there. But when you go remote, or even when companies move towards flexi-days (many will after COVID-19), availability is something that needs to be carefully considered and agreed within the context of the team or department.
This article will focus on one of those ground rules, availability, including a simple but powerful way of starting the conversation about it with your team members, which has a major impact on scheduling.
I have written before, in several articles, about the need to redefine those ground rules when going remote; I list them at the end of this article, in the References section. I mentioned in one of those articles that a former colleague from my Linaro days, Serge Broslavsky, showed me a visualization to start the conversation about availability that I found so useful that I have used it ever since. I have refined it over time, used it frequently and even given it a name: the availability heat map. But before describing what it is, let me start by justifying why you should focus energy on reconsidering availability.
In remote environments, be explicit about availability
When remote, each team member works in a different environment, even if they are located in the same geographical area or time zone, or share the same lifestyle. I always assume as a starting point that their environments may be very different from each other, so their availability may be too. It needs to be agreed, which requires a careful conversation.
Some people live with others at home (friends, a partner, etc.); they may have responsibilities towards them and, in some cases, those around them affect the environment so much that you cannot assume their availability is unaffected. Other people work in cafes, coworking spaces, etc., which involve further constraints.
Another typical case where availability becomes a topic is when team members come from different cultures. Different cultures have different approaches to lunch, for instance. Northern Europeans tend to have lunch very early; Central Europeans usually take no more than one hour (the British even less, in general). There are plenty of cultures out there that love to kill themselves slowly by eating fast and poorly at lunch :-). Others take lunch seriously and devote more time to it; it is a social activity that, in some cases, is very important for families and at work. Latin cultures tend to fall in that category. At the office, the environment makes these habits more homogeneous, but that is not necessarily the case when working remotely, at least not on a daily basis.
I have managed teams where people living up north, or in very cold or very warm areas, had a different availability in summer than in winter. They may want to take advantage of the midday daylight in winter, or prefer to work through midday in summer because it is too warm outside.
An interesting consequence of revisiting availability is the expectation around communication channels outside office hours. I have worked with people who are used to phoning colleagues who are not at the office when they themselves are: if I am at the office and most of my team is too, the reasoning goes, it is fine to phone whoever is not. The heat map helps to open a conversation about the consequences of not being available and what to expect. It helps such people understand which channel should be used to reach you, and when.
A third interesting case is people who multitask or work on more than one project, and also teams with dependencies on other teams that have a different understanding of availability. This case is very frequent in remote environments. Discussing and agreeing on availability becomes a ground rule that should be taken seriously from day one.
What is the advantage of working from home if you cannot make your personal and work life compatible to some extent? A better life balance is a big win for both the employee and the employer, and thinking seriously about availability is essential for achieving it. As a manager, I have had cases in which remote workers were in coworking spaces instead of at home because the company did not provide them with the tools to create such a balance. That should be avoided when possible.
My point is that going remote requires a conversation about availability that you most likely do not need to have at the office, so managers or teams inexperienced in remote work often take it for granted. Once they realize the problem, it may be hard, or even impossible, to redefine availability. In extreme cases, you might only find out when burnout is getting close. Funnily enough, throughout my career I have found more of these cases among managers and freelancers than among employed developers. It has to do with team protection.
The availability heat map
In order to start such a conversation, ask each member of your team or department to fill out the availability heat map as a first step, ideally right after they join your organization. After analysing it, you will have a better idea of the impact that living in different environments, as well as other factors like time zones and personal preferences, will have on people's availability. You will be in a much better position to discuss the team or department schedule, which will be reflected in the calendar (if possible), making it compatible with company policy or business needs.
In summary, make availability explicit; in colocated environments it is generally implicit. The availability heat map is a simple first step to do so.
Who is it for
I have used the availability heat map with the following groups. I assume this extremely simple activity can work for additional groups:
Teams with members in different time zones.
Large remote teams.
Teams with members who belong or support more than one team.
Teams with strong dependencies with people from other teams.
Teams with people with small kids.
I tend to use four colors in the availability heat map, each with a specific meaning. The goal is to assign a color to each hour of the day, as shown in the example. I came to this scheme over time; you can adapt it to your experience or environment:
Green: you are in front of the computer and available to the rest of the team on a regular basis.
Yellow: you might be available, although it cannot be assumed by default. It may depend on the day, the time of year, or the workload.
Amber: you are usually unreachable at these hours unless it is planned in advance. It is an undesired time slot for you by default.
Red: you are available only in an emergency or under very unusual circumstances.
The usual ratios of hours I have worked with in the past are 4-6 green hours, 2-6 yellow hours, 2-4 amber and 8-12 red ones. Do not try to show many green hours at first. This exercise is not meant to demonstrate that you work 8 or more hours a day, which is a common mistake among newcomers who are junior in remote work when they join a new organization or team. The price in your schedule may be very high and eventually unsustainable over time. A minimal code sketch of how such a map can be represented and checked follows.
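To make this concrete, here is a minimal sketch, in Python, of one possible way to encode a filled-out map and check it against those ratios. The function name and hour ranges are made up for illustration; this is not part of the exercise itself.

```python
from collections import Counter

# Hypothetical heat map for one team member: hour of the day (0-23, local time) -> color.
def build_map(green, yellow, amber):
    """All 24 hours default to red; the given (start, end) ranges override it."""
    heat_map = {hour: "red" for hour in range(24)}
    for (start, end), color in [(green, "green"), (yellow, "yellow"), (amber, "amber")]:
        for hour in range(start, end):
            heat_map[hour] = color
    return heat_map

# Made-up example: green 9:00-14:00, yellow 14:00-18:00, amber 18:00-21:00.
member = build_map(green=(9, 14), yellow=(14, 18), amber=(18, 21))

# Sanity check against the suggested ratios (4-6 green, 2-6 yellow, 2-4 amber).
print(Counter(member.values()))
# Counter({'red': 12, 'green': 5, 'yellow': 4, 'amber': 3})
```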
Explain the exercise
My recommendation is that you explain the exercise to the affected people face to face (via video chat), with your own availability already filled in, before asking others to fill out theirs. People from different cultures and backgrounds respond differently to this activity based on cultural factors or prior experience with managers and remote work. In my experience, some people initially take this exercise as a control mechanism, especially if you are the manager or PO rather than the Scrum Master or facilitator.
The goal is to find the ideal time slots for scheduling activities but, at the same time, as a manager you can take this opportunity to learn about people's constraints and desires when it comes to working hours. I use this action as a starting point for some 1:1 conversations. I mentioned before that, when remote, each employee works in a different environment and that environment affects their performance. As a remote manager, you have to learn about it and provide guidance on how to establish a good balance so they maximize work efficiency in a sustainable way. It is not about interfering in their personal lives. The line is thin.
In this example, we have five team members, the last two of whom live in different time zones, UTC-5 and UTC+2. After each member fills out their desired/expected availability, the conversation about scheduling becomes easier. Each team member, as well as managers and other supporting roles, has a simple way to understand what kind of sacrifices each member might have to make to be available to their colleagues, making their availability compatible with business needs as well as team needs (ideally these should be very similar). Scheduling team and department ceremonies and other company activities hopefully becomes easier. Understanding when real-time communication is effective and when work should become asynchronous also becomes simpler.
In this case, thanks to the fact that Kentavious is an early bird and that Kyle is used to working with people in Europe from the US East Coast and Brazil, they have already adapted their availability to work with those in other time zones. As you can see, the approach to lunch is different for each team member. In addition, Anthony has to finish work early, and Marc prefers to work before going to bed, which is a common pattern among parents with small kids.
According to the map, there are two overlapping hours. If I were the manager or part of this team, I would talk to them as a group to explain that increasing the number of overlapping hours brings benefits to the overall performance of the team. I would then talk individually with each member to find a way to gain one or two additional overlapping hours. In general, I would consider three or four hours of overlapping availability enough as a starting point in this case. I always favor a homogeneous expectation of availability throughout the week over having "special" days when your schedule changes. In a previous job I had my "Tuesdays for Asia" and my "Thursdays for the US" and, believe me, it was not fun.
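For a larger team, I would compute the overlap instead of eyeballing it. Below is a minimal sketch with invented green hours and UTC offsets, showing how shared green hours can be counted once every map is normalized to UTC; with these made-up numbers the result happens to be two shared hours, as in the example above.

```python
# Hypothetical green hours (local time) and UTC offsets for three team members.
team = {
    "Anthony": {"green": range(9, 14), "utc_offset": 1},      # UTC+1
    "Kentavious": {"green": range(6, 11), "utc_offset": -5},  # UTC-5
    "Marc": {"green": range(10, 15), "utc_offset": 2},        # UTC+2
}

def green_hours_utc(member):
    """Convert a member's local green hours to a set of UTC hours (0-23)."""
    return {(hour - member["utc_offset"]) % 24 for hour in member["green"]}

# Hours of the day (in UTC) when every member is green at the same time.
overlap = set.intersection(*(green_hours_utc(m) for m in team.values()))
print(sorted(overlap))  # [11, 12] with these made-up numbers
```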
After a conversation and decision process, it would be good to update the availability heat map. I suggest making it available to others. If your organization or project is formed by many teams, you might want to add the availability heat map to your team landing page. In my experience, it helps people who do not deal with a given team on a regular basis to schedule activities with it.
If you have a tool where you can create and maintain a team calendar, try to add the common available hours there and make them visible to others. If your team is a service or support team for other teams, you might want a more powerful tool to communicate your availability, but the availability heat map may do the job at a high level.
There are tools out there that accomplish the same goal as the availability heat map, but I like simplicity and I have never needed anything more complex, assuming you have a powerful corporate calendaring tool.
Finally, please keep in mind that the availability heat map is a dynamic visualization. Revisit this ground rule on a regular basis, at least in summer and in winter. Small but significant changes may apply.
In a variety of use cases, especially those related to remote work, there are basic ground rules that need to be reconsidered. Availability is one of them.
The availability heat map is an extremely simple action that can provide a first overview of the overlapping times and trigger a conversation about increasing or adapting those hours, as a previous step to defining when team ceremonies should take place, how and when communication should happen, etc. It is also an interesting trigger for 1:1 conversations with your reports or colleagues. It is simple and easy to adapt to many use cases.
If you have a different way to reach the same goal, please let me know. If you like this idea and adopt it, please let me know how it goes and which adaptations you made. I am always interested in improving the availability heat map.
Previous articles I wrote related to remote work:
This is the second article of the series. Please read the first post before this one.
In the previous article I explained the process to follow to measure and improve the performance of a software product delivery process, using the simplest possible model to describe it and following a data-driven improvement kata as a way to promote a continuous improvement culture.
Despite providing extremely valuable information, once we have gone through the described process for a few iterations, the limitations of such a simple model become evident. We need to add complexity to our model, bringing it closer to the real software product delivery process.
Add complexity to approach reality.
If you have a clear map of your real delivery process, you can skip this step. If you do not, the first thing I recommend is to run a value stream workshop to discover and describe your delivery process. Visualizing it as a group is a great exercise, not just for those involved in the workshop, but also for the rest of the organization. If you are not able to draw the flow of the code end to end, identifying what happens where and who is responsible for what, trying to measure the performance of the process is often pointless.
How should the model be enriched?
For our purpose, we do not need a great level of detail as the outcome of the value stream workshop, just enough to create the new model. Let's assume for now that one of the outcomes of the workshop is a drawing like the one below, which I took from an exercise for an automotive platform. Only the "green critical path" is shown here. I hope you get an idea of what is needed.
Based on this diagram, you can clearly see that the delivery process can be divided into three or four stages. Try to do it in three, again for simplicity. For this example I will divide it into four since, in automotive, the validation/verification stage is relevant in terms of effort, complexity, throughput and especially stability.
So our richer model can be expressed in the following way:
This new model is a better approximation to our real delivery process.
Mathematical construct and quantitative analysis
Once we have described our system using a richer model, the question is, do we need to change our mathematical construct?
As you can guess, the answer is no, we do not; we just need to enrich it. The construct used to characterize the simplified model would have been of very limited value if we now needed to replace it.
Again, Steve Smith provides in his book "Measuring Continuous Delivery" what we need to extend the metrics. The overall idea is to use the same metrics and measures for the entire process (as we did with the simpler model) and for each stage (corresponding to this new model). This is essentially the case for any linear process, like the one we created. S. Smith refers to these "extensions" of the metrics and measures as indicators.
It now becomes harder to define the events from which to extract the data sets, since the data will probably need to be extracted from different tools. I recommend investing time in describing these events, how the data sets are extracted from the tools involved, and how to convert the data into the right units. You will also need to describe accurately which events mark the beginning and end of each stage. All this effort will allow you to iterate on the process, creating more complex models.
The methodology to measure, process, plot and analyze the different data sets is analogous to the one described in the first post of this series. The only difference is that it should now be applied to each stage, in addition to the overall process.
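As an illustration of what using the same measures per stage means in practice, here is a minimal sketch. The stage names and timestamps are invented; the point is only that each stage's lead time indicator is computed exactly like the end-to-end lead time, just between the events delimiting that stage.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical timestamps for two changes crossing a four-stage process.
changes = [
    {"commit": datetime(2021, 3, 1, 9), "build": datetime(2021, 3, 1, 11),
     "validation": datetime(2021, 3, 2, 9), "release": datetime(2021, 3, 3, 17)},
    {"commit": datetime(2021, 3, 2, 10), "build": datetime(2021, 3, 2, 15),
     "validation": datetime(2021, 3, 3, 9), "release": datetime(2021, 3, 5, 12)},
]

def lead_times(start_event, end_event):
    """Lead time indicator between two delimiting events, in hours."""
    return [(c[end_event] - c[start_event]).total_seconds() / 3600 for c in changes]

# The same measure, applied end to end and then per stage.
for start, end in [("commit", "release"),       # overall process
                   ("commit", "build"),         # first stage
                   ("build", "validation"),     # second stage
                   ("validation", "release")]:  # remaining stages
    lt = lead_times(start, end)
    print(f"{start} -> {end}: mean {mean(lt):.1f}h, stdev {stdev(lt):.1f}h")
```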
It is now easy to see why we tried to keep our scales simple: the number of combinations we need to work with increases. The scenarios for each stage are consistent with those described for the previous (simpler) model, so the qualitative analyses done for both models complement each other.
The definition of the scenarios can be personalized and adapted to your needs. Since you will also use these scenarios as a communication tool, make sure they are described in a way that resonates with your workforce. Reduce their number to those that make sense to you and ignore the rest.
You should perform the same qualitative analysis for each stage of the delivery process, describing in simple words the scenario that best matches the current performance of your delivery process as well as the target scenario. Please remember that the goal is to improve the performance of the overall process, not to optimize any specific stage. If an improvement in a specific stage does not have a positive impact on the overall process, you need to question whether to consolidate it.
Data Driven Improvement Kata
This step will require changes. Now that our system model is more complex, the methodology to improve the performance of the delivery process needs to consider the company's organizational structure as well as the cycles in which the business, product and development teams operate.
I suggest considering two cycles whenever you can. In my example (deliberately academic, to explain the process) I have considered three: the business cycle, the product cycle and the experiments cycle.
Please check the references (at the end of the first article) to learn more about improvement kata. I will make some considerations about the above example for those of you less familiar with continuous improvement methodologies:
The overall idea is to synchronize the three cycles.
Business cycles are usually a year long. I would structure the business iterations in quarters.
Define the business goals for the product and relate them, both quantitatively and qualitatively, to the metrics / scenarios.
Define the current scenario and the target scenario for the current quarter. Be concise.
Define the most suitable cycles for your product. Bear in mind that each iteration will require some effort in analysis, coordination and communication with the workforce, so cycles that are too short may bring unnecessary overhead.
If the development teams work in weekly sprints, I would start with monthly iterations at the product level. If engineering teams work in two-week sprints, consider iterations of six weeks at the product level.
As described in the previous article for the simpler model, I will use the boards to explain how the iterations are defined.
The first board corresponds to the business cycle, the second to the product cycle and the third to the technical experiments to be performed by development teams. Click the picture to enlarge it and read the advice.
In this second article, we have enriched our model by introducing more complexity (four stages). We saw how the mathematical construct should be extended to characterize each stage, which gives us finer-grained information and increases the accuracy of our analysis. We have seen that Throughput and Stability remain useful as key metrics, as expected.
We have also discovered that, as happened with the quantitative analysis, the qualitative one is essentially an extension of the one introduced in our previous article for the simpler model. We have adapted our approach to continuous improvement, fitting the data-driven improvement kata to a more complex environment. The proposed example covers, in my view, large organizations. The kata has been summarized in three boards, one per level: business, product and technical.
These two articles represent a modest attempt to summarize and structure, in five basic steps, the process an organization should go through to improve its delivery process performance at scale, using Throughput and Stability as guiding lights.
Obviously, writing about this topic is easier than implementing it, so I am looking forward to reading about your own experience and how it contradicts, modifies, complements or supports the described methodology and advice.
This is the first of a two-article series on the topic. I recommend reading this one first. Click here to open the second one.
Like many other physics graduates, I have a passion for Feynman. His explanations of complex concepts made them seem reachable for students like me, to the point where you develop a taste for simplicity. Producing software at scale is complex, but if you are clear about some basic and often simple concepts, and you stay passionate about simplicity, you will not just be able to better understand the management challenges ahead of you but also to communicate them, and the potential solutions, more effectively.
As a manager, and currently a consultant, I always feel that a significant part of my job is to put focus on simple things that are often forgotten or ignored. I keep Feynman's words in mind and try to stick to them.
“If you can’t explain something in simple terms, you don’t understand it.”
This series of articles is an attempt to explain in simple words how to approach software product delivery process performance metrics in production environments at scale. I will use automotive as the main example because it is the industry I am currently working in.
The ideas and practices you will read about in these two articles are not mine. I just summarize other people's ideas, structure them, and add some bits here and there based on my own experience. You will find key references on this topic at the end of this article.
Simplicity, when done wrong, leads to inaccuracy and sometimes to a disconnection from reality, which can make the concepts about to be explained inapplicable. If you see any of that in this article, please point me to it and I will work on it further. I use some of these explanations in my daily job so, in the same vein, if you help me improve them I will really appreciate it.
Why the software product delivery process?
When you analyze the life cycle of any software portfolio developed at scale in industries like automotive, you can identify several phases (check the image).
Of all these phases, the delivery process is the one where you can apply the most engineering by making it systematic. Continuous Delivery is becoming increasingly popular as a set of principles and practices to do so. It is also important to add to CD a release approach that meets your needs. If continuous deployment does, then it should definitely be the selected one.
I will concentrate on the delivery phase of the product life cycle.
Approach to follow
As Continuous Delivery gets more and more traction, the understanding of any software product delivery process as an engineering process increases, especially at scale. This necessarily leads to the need to strengthen data-driven decision making within organizations.
Telemetry is applied to areas like operations or customer support, for instance. Data is collected, aggregated, plotted and then used as input to analysis and decision-making processes. This way of using data is very similar to what science was about in its early days: observation. Sophisticated, yes, but observation after all.
That approach is useful, especially for simple systems, but it is limited when the goal is to support decisions meant to affect significant parts of the delivery process, or even all of it, especially when the process is complex.
At scale, we know that optimizing locally does not necessarily lead to global improvements. At the same time, to optimize a system globally, we know that we may not need to optimize every single stage of the delivery process.
I have invested energy more than once in persuading executives and managers that a "bottom-up" approach (from local to global), frequently based on telemetry, needs to be complemented with a "top-down" (global to local) approach: a more systematic, more "scientific" one.
Sometimes I refer to this approach as applying "systems thinking", due to the rejection I have witnessed among managers of anything that resembles "science". I believe it is because they perceive it as too theoretical. Have you experienced something similar?
In any case, as many reputed consultants in product development often point out, what you will read about here is nothing but an application of the scientific method to problem solving. When it comes to data and how it should support decision-making processes (continuous improvement), I have noticed that it works well to talk about metrics vs telemetry.
In summary, when I try to explain the need to invest in product delivery performance metrics to support decision-making processes, using a continuous improvement approach, I simplify the message as described in the picture, which I often refer to as a top-down vs a bottom-up approach. Not the best label, but easy to remember and communicate, especially to executives.
Five not-so-simple steps
I rarely have more than one shot at justifying the rationale behind the approach I will describe, which applies a data-driven improvement kata using software product delivery performance metrics as guidance, so I have structured it in five "simple steps":
Model your delivery process.
Create a mathematical construct.
Measure and analyze.
Apply a data driven improvement kata to optimize the delivery process.
Add complexity to approach reality. Go back to point 1.
1. Model your system
Like scientists do, we should start with a simple model: the simplest possible abstraction of our delivery process that can provide context and enough valuable information to start with. If we cannot understand the behavior and performance of a simplified abstraction of our delivery process, we will not understand the real process, which is significantly more complex.
What is the simplest model we can use to describe our delivery process? I start with this one…
Depending on how advanced the automaker is in adopting a software mindset, as well as on its organizational structure, the development teams may have end-to-end ownership of the software shipped in the vehicles, or they may release it (hand it over) to those in charge of deployment in production (production in the factory, including end-of-line testing). There are other possible actors involved. To simplify, we will assume that the delivery process ends with a release to the factory.
The delivery process begins once the software is developed, that is, when the developer considers she is finished and creates a merge request to a repository that is part of the product inventory (a repository containing software that ends up in the product or that is necessary to produce it).
With the above in mind, we can model our delivery process as…
Now that we have described our delivery process in the simplest way possible, we will go over the rest of the steps. The goal is to use this model to describe the process and work with it. Once we understand it and we are able to improve it, we can add complexity.
2. Create a mathematical construct
We will use math to explain the behavior of our system and the performance of the process involved. Our mathematical construct will be formed by three metrics:
Throughput and stability characterize the performance of our model (the delivery process) in a simple way; they are easy to understand and, as we will see in the coming article, they work well for both simple and complex models.
There are plenty of engineering disciplines where stability is a popular metric; software product professionals familiar with agile know these concepts from manufacturing, for instance. Those of you with a background in physics, or in engineering areas where systems manage flows of liquids or discrete elements, like networking or water supply systems, are already familiar with the concept of throughput.
The goal of this article is not to go deep into these metrics; I will only provide the minimum level of detail needed to understand them. At the end of the article I provide several references if you are interested in further details and justifications. I strongly recommend going over them.
Wikipedia mentions that Cost of delay combines an understanding of value with how that value leaks away over time. It is “a way of communicating the impact of time on the outcomes we hope to achieve.”
“If you only quantify one thing, quantify the cost of delay.” – Don Reinertsen
This series will not concentrate on this metric. CoD does not characterize our delivery process although it is related to the other two metrics. CoD refers to the entire product life cycle.
Why did I mention it then?
I always recommend adding a business-related metric to whatever software production process performance metrics you use, for at least the following reasons:
It relates product-level decisions to business impact, and business decisions to their impact at product level.
It relates business decisions and their impact to engineering activities and improvements.
Software developers make better decisions when they understand their impact on the business.
It helps executives understand the profound benefits that decentralization has when it comes to many types of decisions.
It works as an antidote to old-school managers.
There are other popular business metrics that could also be used, by the way.
Dave Farley mentioned in one of his videos, published a few months back, the following: "Stability measures the quality of the output. It refers to building the thing right." and "Throughput measures the efficiency with which we produce the output. It refers to building the right thing."
Stability and Throughput are becoming increasingly popular as metrics, especially since the publication of the book Accelerate, which summarizes the findings of the State of DevOps Report up to 2019. You can find this reference in the Reads section of this site. I mentioned in a previous post why I believe they are the right ones for the job.
The measures that describe these two metrics will be:
Change Failure Rate gives an idea of the proportion of changes (input) that did not lead to a deployment or release (output) and so required remediation actions.
Failure recovery time gives an idea of the time required to detect and remediate a failed change until a new deployment or release (output) is produced.
Lead time provides an idea of the time that a change (merge or pull request – input) takes to produce a deployment or release (output).
Frequency gives an idea of how often changes are released. Time interval is the inverse of frequency; we will use it instead of frequency for practical reasons.
One interesting point about these measures is their units. They have simple units, which is one of the reasons for using Time Interval instead of frequency, for instance.
The units also adapt easily to delivery processes in different industries or for different types of products. For instance, lead times for a mobile app are very different from those for an operating system for an automotive infotainment platform.
Looking for simplicity, we will use averages and standard deviations to characterize the data sets from our measurements, taken over a period of time. Obviously this does not apply to the Change Failure Rate.
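As a sketch of how these four measures could be computed from raw events, assuming a made-up data structure (a list of changes with merge and release timestamps, a failure flag and a recovery time), consider the following; your tooling and event definitions will differ.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical raw data: one entry per change (merge request) over a period.
changes = [
    {"merged": datetime(2021, 2, 1, 10), "released": datetime(2021, 2, 3, 16),
     "failed": False},
    {"merged": datetime(2021, 2, 2, 9), "released": datetime(2021, 2, 5, 11),
     "failed": True, "recovery_hours": 6.0},
    {"merged": datetime(2021, 2, 4, 14), "released": datetime(2021, 2, 8, 10),
     "failed": False},
]

# Throughput measures: lead time and time interval between releases.
lead_times = [(c["released"] - c["merged"]).total_seconds() / 3600 for c in changes]
releases = sorted(c["released"] for c in changes)
intervals = [(b - a).total_seconds() / 3600 for a, b in zip(releases, releases[1:])]

# Stability measures: change failure rate and failure recovery time.
failure_rate = sum(c["failed"] for c in changes) / len(changes)
recovery_times = [c["recovery_hours"] for c in changes if c["failed"]]

print(f"Lead time: {mean(lead_times):.1f}h +/- {stdev(lead_times):.1f}h")
print(f"Time interval: {mean(intervals):.1f}h between releases")
print(f"Change failure rate: {failure_rate:.0%}")
print(f"Failure recovery time: {mean(recovery_times):.1f}h")
```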
My recommendation is that you start with these two metrics and introduce further ones step by step if required. Remember that less but meaningful is more and that, at scale, the hard part is to drive changes using the metrics as part of a continuous improvement culture. In other words: people over metrics, tools, technology…
3. Measure and analyze
3.1 Quantitative analysis
In our simplified model, the different measures should not be hard to identify. For the input, we need to dig into our SCM to find out when the code is submitted (the merge request). For the output, we identify in the binary artifact storage tool when the right binary is tagged as a release. Those two events are all we need to define the measures behind our metrics.
Be careful with time zones and error propagation during conversions. If you have access to data scientists or somebody with strong competence in math, ask for advice. Consider that our model will grow in complexity and other people may end up in charge of extracting the data for you. At some point you will use tools that extract, process and plot the data in real time and that will need to be configured properly, so, even though the events are simple to identify, you should describe in detail how to extract the data and convert it to the right units.
Validate your results and their usefulness with simple tooling before automating the process using more complex tools. Do not be ashamed of using something like spreadsheets at first or something even simpler.
Plot the data using a timescale on the X-axis that is appropriate for your environment. It is hard to guess which range will be the most useful, so I suggest trying different options. For simple applications that range might be weeks, while for an automotive platform it might be months or even quarters. Different visualizations may lead you to slightly different conclusions, so explore your options.
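As a sketch of this exploration step, assuming you already have dated lead time averages (the figures below are invented), something as simple as this is enough to compare timescales before investing in dashboards.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical weekly lead time averages, in days.
lead_time = pd.Series(
    [12.0, 11.5, 13.2, 10.8, 9.9, 10.1, 9.5, 8.8],
    index=pd.date_range("2021-01-03", periods=8, freq="W"),
)

# Compare two timescales before settling on one for your dashboards.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
lead_time.plot(ax=ax1, marker="o", title="Lead time, weekly")
lead_time.resample("M").mean().plot(ax=ax2, marker="o", title="Lead time, monthly")
for ax in (ax1, ax2):
    ax.set_ylabel("days")
plt.tight_layout()
plt.show()
```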
Once you are able to get the right data in a reliable and consistent way, so you can determine your Throughput and Stability for any point in time, you will be ready to go for further quantitative analysis.
Instead of paying too much attention at the beginning to absolute values of the different measures, look for trends in the curves and what happened around:
key events for your product or organization.
dates when key actions were implemented in your delivery process.
Try to learn about the behavior of your delivery process and its performance before getting too deep into trying to improve it. I recommend having conversations with the teams instead of jumping to early conclusions.
But in order to communicate the current state of your delivery process as well as the target conditions across the entire organization, you will need to turn your quantitative analysis into a qualitative one.
3.2 Qualitative analysis
The first step in moving from a quantitative to a qualitative analysis is to define scales: for each measure we define a scale and assign a label to each range of values. To keep things simple, we will base each scale on a single threshold.
Which value should be used as threshold? Choose one that makes sense for your organization. Check the values and choose one that will make the rest of the qualitative analysis meaningful.
As you can see, the chosen labels are High and Low for our simplest scale, based on a single threshold (value). Once you have defined the thresholds and scales for each measure, it is time to define scenarios based on the scales you have just created.
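Here is a sketch of what such a scale can look like in code. The threshold values are invented; the single-threshold High/Low scheme is the one just described.

```python
# Hypothetical thresholds; choose values that are meaningful for your organization.
THRESHOLDS = {
    "lead_time_days": 14,
    "interval_days": 7,
    "failure_rate": 0.15,
    "recovery_hours": 24,
}

def label(measure, value):
    """Map a measured value to the qualitative scale: below the threshold is Low."""
    return "Low" if value < THRESHOLDS[measure] else "High"

# Example: turn one period's measurements into qualitative labels.
measured = {"lead_time_days": 9, "interval_days": 10,
            "failure_rate": 0.08, "recovery_hours": 30}
print({m: label(m, v) for m, v in measured.items()})
# {'lead_time_days': 'Low', 'interval_days': 'High',
#  'failure_rate': 'Low', 'recovery_hours': 'High'}
```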
Following the scenarios published by Steve Smith in his book “Measuring Continuous Delivery” I created the representations below to summarize them. I strongly recommend you read this book. Many of the ideas you will see here and in the next article were either taken from or inspired by it.
Based on the quantitative analysis and the scales, it should be possible to identify which scenario corresponds to the delivery process of your software product life cycle. Such scenarios allow you to build a common understanding across the entire organization of the state of the delivery process, of where to improve it, and of the target scenario.
Take the above scenarios (in the first column) as examples. Some might not be relevant to you. At the same time, try to describe each of them in simple words so the entire workforce understands them.
You now have all the tools to describe the performance of your delivery process, both quantitatively and qualitatively, as well as a simple way to describe your current and target conditions (where you want to head). This last step is about how to get from one to the other.
4. Apply a data driven improvement kata to optimize the delivery process.
How do we move (improve) from our current scenario to the target one? A data-driven improvement kata is the answer.
Once you introduce metrics in an organization, there is a high risk of firing up a race among different groups, at different levels, to make their own interpretations of the data and improve specific parts of the process locally. This behavior frequently leads to incompatible measures being taken in different parts of the system, instead of improving the metrics as expected. Defining a coordinated improvement kata, including a shared understanding of the current scenario (condition), is a must.
In the same way that it was not my intention to go over details about metrics, it is far from my intention to go over details about an improvement kata. Please check the reference section to learn more about it. A few things though are important to mention.
The steps of the improvement kata are, essentially:
Understand the direction or challenge.
Grasp the current condition.
Establish the next target condition.
PDCA cycle to experiment towards the target condition.
I suggest including in the first step of the kata the business and product success picture for the current cycle. Usually the business cycle is a year; the product cycle could be a month, two, a quarter… depending on the product and organization. The experimentation cycle should be restricted to a single sprint. If an experiment takes longer, slice it so it fits in a single sprint.
A data-driven improvement kata requires describing the current scenario, the target one, the hypotheses of the experiments and their results, based on the metrics from the mathematical construct (preferably) or on proxy ones. For our example we will assume a simple organization that works in short sprints; our improvement kata should then be structured around that cadence.
I like to represent any improvement kata through boards, as a summary. Here is a simple one you can start with. These boards should be visible and understood by the entire workforce. Treat them as a living document.
We will be able to learn a lot about our delivery process with this simple model, and to improve the delivery process over several iterations, but at some point a richer model will be required. It will then be time to get one step closer to reality.
5. Add complexity to approach reality. Go back to point 1.
We have justified the relevance of starting simple when evaluating the performance of our delivery process. We created the simplest possible model to start our analysis from, and described that model as well as a mathematical construct to characterize it. Some considerations were provided about how to perform the measurements and plot the results as part of a quantitative analysis.
We learned how to move from a quantitative to a qualitative analysis, and why. Once the qualitative analysis was done, we defined a data-driven improvement kata to improve the performance of our delivery process iteratively. Such a kata is summarized in a simple board.
In essence, this is a process any organization can follow to improve the performance of its delivery process effectively. If you are not able to say out loud what your Throughput and Stability were over the past quarter, last month, yesterday, today… your delivery process is not under control. In that case, it is hard to imagine that you will be able to improve it in a meaningful way.
There are countless references to consider, but most (if not all) of the ideas in this and the coming article are taken from or summarized in the following references. Some of them are included in the Reads section of this site. The main references are:
This is the second of a series of two articles describing the idea of correlating software product delivery process performance metrics with community health and collaboration metrics, as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. The first article is called If you want to go far, together is faster (I); please read it before this post if you haven't already. You can also watch the talk I gave at InnerSource Commons Fall 2020, which summarizes the series.
In the previous post I provided some background and described my perception of what causes resistance from managers to involving their development teams in Open Source projects and Inner Source programs. I enumerated five not-so-simple steps to reduce such resistance. This article explains those steps in some detail.
Let me start by enumerating the proposed steps again:
1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics in Open Source projects and Inner Source programs.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between those two groups of metrics.
4.- Create a reporting strategy for developers and managers based on such positive correlation.
5.- Use that strategy to turn your story around: it is about creating positive business impact at scale through open collaboration.
The solution explained
1.- Collaboration and community health metrics as well as product delivery process performance metrics.
Most Open Source projects and Inner Source programs focus their initial metrics efforts on measuring collaboration and community health. There is an Open Source project hosted by the Linux Foundation focused on the definition of many types of metrics; collaboration and community health metrics are among the most mature ones. The project is called CHAOSS. There you can also find plenty of examples of these metrics applied in a variety of Open Source projects.
Inner Source programs are taking the experience developed by Open Source projects in this field and applying it internally, so many of them use such metrics (collaboration and community health) as the basis for evaluating how successful they are. In our attempt to expand our study of these collaboration environments to areas directly related to productivity, efficiency, etc., additional metrics should be considered.
Before getting into the core ones, I have to say that many projects pay attention to code-review-related metrics, as well as defect management, to evaluate productivity or performance. These metrics go in the right direction, but they are only partial and, when it comes to demonstrating a clear relation between collaboration and productivity or performance, they do not work very well in many cases. I will give a few examples of why.
Code review is a standard practice among Open Source projects, but at scale it is perceived by many as inefficient compared to other activities when knowledge transfer and mentorship are not a core goal. Pair or mob programming, as well as code review restricted to the team scale, are practices many execution managers perceive as more efficient in corporate environments.
When it comes to defect management, companies have been tracking these variables for a long time, and it will be very hard for Open Source and Inner Source evangelists to convince execution managers that what is being done in the open, or in the Inner Source program, is so much better, and especially cheaper, that it is worth participating. For many of these managers, cost control comes first and code sustainability later, not the other way around.
Unsurprisingly, I recommend focusing on the delivery process of the software product as a first step towards reducing the resistance of execution managers to embracing collaboration at scale. I pick the delivery process because it is deterministic, so it is simpler to apply process engineering (and therefore metrics) to it than to any other stage of the product life cycle that involves development. Of all the potential metrics, throughput and stability are the essential ones.
Throughput and Stability
It is not the point of this article to go deep into these metrics. I suggest referring to delivery or product managers at your organization who embrace Continuous Delivery principles and practices for information about these core metrics. You can also read Steve Smith's book Measuring Continuous Delivery, which defines the metrics in detail, characterizes them, and provides guidance on how to implement and use them. You can find more details about this and other interesting books in the Reads section of this site, by the way.
There are several reasons for me to recommend these two metrics. Some of them are:
Both metrics characterize the performance of a system that processes a flow of elements. Software product delivery can be conceived as such a system, where the information flows in the form of code commits, packages, images…
Both metrics (sometimes in different forms or expressions) are widely used in other knowledge areas, in some cases for quite some time now: networking, lean manufacturing, fluid dynamics… There is little magic behind them.
To me the most relevant characteristic is that, once your delivery system is modeled, both metrics can be applied at the system level (globally) and at a specific stage (locally). This is extremely powerful when trying to improve the overall performance of the delivery process through local actions at specific points: you can track the effect of local improvements on the entire process.
Both metrics have simple units and are simple to measure. The complexity is operational, arising when different tools are used across the delivery process; using these metrics reduces it to a technical problem.
Throughput and Stability are positively correlated when applying Continuous Delivery principles and practices. In addition, they can be used to track how well you are doing when moving from a discontinuous to a continuous delivery system. Several of the practices promoted by Continuous Delivery are already very popular among Open Source projects; in some cases, some would claim they were invented there, way before Continuous Delivery was a thing in corporate environments. I love chicken-and-egg debates… but not now.
Let's assume from now on that I have convinced you that Throughput and Stability are the two metrics to focus on, in addition to the collaboration and community health metrics your Open Source or Inner Source project is already using.
If you are not convinced, by the way, even after reading S. Smith's book, you might want to check the most common references on Continuous Delivery. Dave Farley, one of the fathers of the Continuous Delivery movement, has a new series of videos you should watch. One of them deals with these two metrics.
2.- Correlate both groups of metrics
Let’s assume for a moment that you have implemented such delivery process metrics in several of the projects in your Inner Source initiative or across your delivery pipelines in your Open Source project. The following step is to introduce an Improvement Kata process to define and evaluate the outcome of specific actions over prestablished high level SMART goals. Such goals should aim for a correlation between both types of metrics (community health / collaboration and delivery process ones).
Let me give an example. It is widely understood in Open Source projects that being welcoming is a sign of good health. It is common to measure how many newcomers the project attracts over time and their initial journey within the community, looking for their consolidation as contributors. Similar thinking is followed in Inner Source projects.
The truth is that more capacity does not always translate into higher throughput or an increase in process stability. On the contrary, it is widely accepted among execution managers that the opposite is more likely in some cases. Unless the work structure, meaning the teams and the tooling, is oriented to embrace flexible capacity, high rates of capacity variability lead to inefficiencies. This is an example of an expected negative correlation.
In this particular case, then, the goal is to extend the actions related to increasing our number of new contributors into our delivery process, ensuring that our system absorbs the increase in capacity at the expected rate and that we can track it accordingly.
What do we have to do to mitigate the risk of increasing the integration failure rate due to an increase of throughput at the commit stage? Can we increase our build capacity accordingly? Can our testing infrastructure digest the increase in builds derived from increasing our development capacity, assuming we keep the number of commits per triggered build constant?
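As a sketch of that last question, here is a toy back-of-the-envelope estimate. All the figures and variable names are invented for illustration; plug in your own pipeline numbers.

```python
# Toy capacity check with invented figures, not real measurements.
commits_per_day = 120       # current throughput at the commit stage
growth_factor = 1.5         # expected effect of onboarding new contributors
commits_per_build = 4       # commits batched into each triggered build
build_minutes = 25          # average duration of one build
executors = 6               # build agents running in parallel
hours_available = 24        # the build farm runs around the clock

builds_needed = commits_per_day * growth_factor / commits_per_build
builds_possible = executors * hours_available * 60 / build_minutes

print(f"builds needed per day:   {builds_needed:.0f}")
print(f"builds possible per day: {builds_possible:.0f}")
print("capacity is sufficient" if builds_needed <= builds_possible
      else "scale up executors or batch more commits per build")
```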
In summary, work on the correlation between both groups of metrics: link actions so that they affect both the community health and collaboration metrics and the delivery metrics.
3.- Focus on decisions and actions that create a positive correlation between both groups of metrics.
Among the executed actions designed to increase our number of contributors, there will be some that lead to a reduction in throughput or stability, others that have a positive effect on one of them but not the other (spoiler alert: sometimes both will decrease) and some that increase both of them (positive correlation).
If you work in an environment where Continuous Delivery is the norm, those behind the execution will understand which actions have a positive correlation between throughput and stability. Your job will simply be to link those actions with the ones you are familiar with in the community health and collaboration space. If not, your work will be harder, but still worth it.
For our particular case you might find, for instance, that a simple measure to digest the increasing number of commits (bug fixes) is to scale up the build capacity, if you have budget left. You might find, though, that you have problems doing so when reviewing acceptance criteria because you lack automation, or that your current testing-on-hardware capacity is almost fixed due to limitations in the system that manages your test benches, so additional effort is required to improve the situation.
Establish experiments that consider not just the collaboration side but also the software delivery one, and translate into production those experiments that demonstrate a positive correlation between the target metrics, increasing all of them. This might bring you surprising results, sometimes far from the common knowledge of those focused on collaboration aspects only, but closer to that of those focused on execution.
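A minimal sketch of how such an experiment could be evaluated, assuming you sample a community metric and a delivery metric over the same weeks. The numbers below are invented, and statistics.correlation requires Python 3.10 or later.

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Weekly samples over the same eight weeks (illustrative numbers only):
new_contributors = [2, 4, 3, 6, 8, 7, 9, 11]
integration_failure_rate = [0.08, 0.09, 0.08, 0.14, 0.18, 0.16, 0.21, 0.24]

r = correlation(new_contributors, integration_failure_rate)
print(f"Pearson r = {r:.2f}")
# A strongly positive r here means more newcomers came with more failed
# integrations, i.e. the negative correlation with stability described
# in the previous step. The experiments worth translating into production
# are those where the community metric grows while the failure rate does not.
```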
4.- Create a reporting strategy for developers and managers based on such positive correlations.
A board member of an organization I was managing once gave me a piece of advice that I have followed ever since. It was something like…
“Managers talk to managers through reports. Speak up clearly through them.”
As a manager, I used to put a lot of thought into the reporting strategy; I have written some blog posts about this point. Besides things like the language used or the OKRs and KPIs you base your reporting upon, understanding the motivations and background of the target audience of those reports is just as important.
I suggest you pay attention to how those you want to convince about participating in Open Source or Inner Source projects report to their managers, as well as how others report to them. Are those reports time-based or KPI-based? Are they presented and discussed in 1:1s or in a team meeting? Usually every senior manager dealing with execution has a consolidated way of reporting and being reported to. Adapt to it instead of keeping the formats we are more used to in open environments. I love reporting through a team or department blog, but it might not be the best format for this case.
After creating and evaluating many reports about community health and collaboration activities, I suggest changing how they are conceived. Instead of focusing on collaboration growth and community health first and then on the consequences of such improvements for the organization (the benefits), focus first on how product or project performance has improved as collaboration and community health have improved. In other words, change how cause and effect are presented.
The idea is to convince execution managers that, by participating in Open Source projects or Inner Source programs, their teams can learn how to be more efficient and productive in short cycles while achieving long-term goals they can present to executives. Help those managers present both types of achievements to their executives using your own reports.
For engineers, move the spotlight away from the growth of interactions among developers and put it on the increase in stability derived from making those interactions meaningful, for instance. Or try to correlate diversity metrics with defect management results, or with reductions in change failure rates or in detected security vulnerabilities. Move part of your reporting focus away from team satisfaction (a common strategy within Open Source projects) and towards team performance and productivity. They are obviously intimately related, but tech leads and other key roles within your company might be more sensitive to the latter.
In summary, you achieve the proposed goal if execution managers can take the reports you present to them and insert them into theirs without re-interpreting the language, the figures, the datasets, the conclusions…
5.- Turn your story around.
If you manage to find positive correlations between the proposed metrics and report on those correlations in a way that resonates with execution managers, you will have established a very powerful platform to build an unbeatable story around your Inner Source program or your participation in Open Source projects. Investment growth will meet less resistance and it will be easier to infect execution units with the practices and tools promoted through the collaboration program.
Champions and evangelists will feel more supported in that viral spread, and those responsible for these programs will gain an invaluable ally in their battles with legal, procurement, IP or risk departments, among others. Collaboration will not just be good for the developers or the company, but clearly also for the product portfolio or the services. And not just in the long run, but also in the shorter term. That is a significant difference.
Your story will be about increasing business impact through collaboration, instead of about collaborating to achieve bigger business impact. Open collaboration environments increase productivity and have a tangible positive impact on the organization’s products and services, so they have a clear positive business impact.
To attract execution managers to promote the participation of their departments and teams in Open Source projects and Inner Source programs, I recommend defining a different communication strategy, one that relies on reports built from the results of actions that show a positive correlation between community health and collaboration metrics and delivery process performance metrics, especially throughput and stability. This idea can be summarized in the following steps, explained in these two articles:
Collaboration within a commercial organization matters more to management if it has a measurable positive business impact.
To make decisions and evaluate their impact within your Inner Source program or the FLOSS community, combine collaboration and community health metrics with delivery metrics, fundamentally throughput and stability.
Prioritize those decisions/actions that produce a tangible positive correlation between these two groups of metrics.
Report, especially to managers, based on such positive correlations.
Adapt your Inner Source or Open Source story: increase business impact through collaboration.
In a nutshell, it all comes down to proving that, at scale…