Improve your software product delivery process performance using metrics (II)

This is the second article of the series. Please read the first post before this one.

In the previous article I explained the process to follow, using the simplest possible model of a software product delivery process, to measure and improve its performance, following a data-driven improvement kata as a way to promote a continuous improvement culture.

Despite providing extremely valuable information, such a simple model will show its limitations once we have gone through the described process for a few iterations. We will need to add complexity to our model, bringing it closer to the real software product delivery process.

Add complexity to approach reality.

If you have a clear map of your real delivery process, you can skip this step. If you do not, the first thing I recommend is to run a value stream workshop to discover and describe your delivery process. Visualizing it as a group is a great exercise, not just for those involved in the workshop, but also for the rest of the organization. If you are not able to draw the flow of the code end to end, identifying what happens where and who is responsible for what, trying to measure the performance of the process is often pointless.

How should the model be enriched?

For our purpose, we do not need a great level of detail as the outcome of the value stream workshop, just enough to create your new model. Let’s assume for now that one of the outcomes of such a workshop is a drawing like the one below, which I took from an exercise for an automotive platform. Only the “green critical path” is shown here. I hope you get an idea of what is needed.

Based on this diagram, you can clearly see that the delivery process can be divided into three or four stages. Try to do it in three, again, for simplicity. For this example I will divide it into four since, in automotive, the validation/verification stage is relevant in terms of effort, complexity, throughput and especially stability.

So our richer model can be expressed in the following way:

This new model is a better approximation to our real delivery process.

Mathematical construct and quantitative analysis

Once we have described our system using a richer model, the question is, do we need to change our mathematical construct?

As you can guess, the answer is no, we do not. We just need to enrich it. The construct used to characterize the simplified model would have been of very limited value if we now needed to modify it.

Again, Steve Smith provides in his book “Measuring Continuous Delivery” what we need to extend the metrics. The overall idea is to use the same metrics and measures for the entire process (as we did with the simpler model) and for each stage (corresponding to this new model). This is essentially the case for any linear process, like the one we created. S. Smith refers to these “extensions” of the metrics and measures as indicators.

It now becomes harder to define the events from which to extract the data sets, since the data will probably need to be extracted from different tools. I recommend investing time in describing these events, how the data sets are extracted from the affected tools, how to convert the data into the right units… You will also need to describe accurately which events determine the beginning and end of each stage. All this effort will allow you to iterate on the process, creating more complex models.
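
As a purely illustrative sketch (the stage and event names below are my own, not taken from the article or from any specific tool), these boundary definitions could be written down as a small piece of configuration that later drives the data extraction:

```python
# Hypothetical four-stage model with the events that open and close each stage.
# Adapt the names to your own value stream and to the tools (VCS, CI server,
# test management, deployment tooling...) that actually emit these events.
STAGE_BOUNDARIES = {
    "development": {"start": "commit_pushed",            "end": "merged_to_mainline"},
    "integration": {"start": "merged_to_mainline",       "end": "integration_build_passed"},
    "validation":  {"start": "integration_build_passed", "end": "validation_suite_passed"},
    "deployment":  {"start": "validation_suite_passed",  "end": "release_deployed"},
}
```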

The methodology to measure, process, plot and analyze the different data sets will be analogous to the one described in the first post of this series. The only difference is that now it should be done for each stage, in addition to the overall process.
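
As a minimal sketch of that methodology, and assuming the events above have already been normalized into timestamped records per work item and stage (the record layout and helper names are assumptions of mine, not part of the article), per-stage and overall indicators could be computed along these lines:

```python
from collections import defaultdict
from statistics import median

# Hypothetical record layout: one entry per work item per stage, with
# "start" and "end" as datetime objects and "failed" flagging a failed run.
# {"item": "FEAT-123", "stage": "integration",
#  "start": datetime(...), "end": datetime(...), "failed": False}

def hours(record):
    """Time a work item spent in one stage, in hours."""
    return (record["end"] - record["start"]).total_seconds() / 3600

def per_stage_indicators(records, stages):
    """Median stage lead time (a throughput indicator) and failure rate
    (a stability indicator) for each stage."""
    indicators = {}
    for stage in stages:
        runs = [r for r in records if r["stage"] == stage]
        if not runs:
            continue
        indicators[stage] = {
            "median_lead_time_h": median(hours(r) for r in runs),
            "failure_rate": sum(r["failed"] for r in runs) / len(runs),
        }
    return indicators

def overall_lead_times(records):
    """End-to-end lead time per work item: the sum of its stage times."""
    totals = defaultdict(float)
    for r in records:
        totals[r["item"]] += hours(r)
    return dict(totals)
```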

Qualitative analysis

It is easy now to realize why we tried to keep our scale simple. The number of combinations we need to work with now increases. The scenarios for each stage are consistent with those described for the previous (simpler) model, so the qualitative analyses done for the two models complement each other.

The definition of the scenarios can be personalized and adapted to your needs. Since you will also use those scenarios as a communication tool, make sure they are described in a way that resonates with your workforce. Reduce their number to those that make sense to you and ignore the others.

You should execute the same qualitative analysis for each stage of the delivery process to describe in simple words the scenario that best describes the current performance of your delivery process, as well as the target scenario. Please remember that the goal is to improve the performance of the overall process, not to optimize any specific stage. If an improvement made in a specific stage does not have a positive impact on the overall process, you need to question whether to consolidate that improvement.
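
As a hedged sketch of that last point (the function name and the 2% threshold are assumptions of mine, not the article's), a consolidation check could compare the overall indicator before and after a stage-level experiment rather than looking at the stage indicator alone; stability should be checked in the same way:

```python
def should_consolidate(overall_before_h, overall_after_h, min_gain=0.02):
    """Keep a stage-level improvement only if the overall median lead time
    improved by at least min_gain (2% by default): local optimizations
    must show up end to end, otherwise they are questionable."""
    if overall_before_h <= 0:
        return False
    gain = (overall_before_h - overall_after_h) / overall_before_h
    return gain >= min_gain
```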

Data Driven Improvement Kata

This step will require changes. Now that our system model is more complex, the methodology to improve the performance of the delivery process will need to consider the company's organizational structure as well as the cycles in which the business, product and development teams operate.

At scale, these katas cannot be structured in a single cycle, but in several. The book “Leading the Transformation: Applying Agile and DevOps Principles at Scale” by Gary Gruver and Tommy Mouser provides an idea of how to approach this continuous improvement process for larger organizations.

I suggest considering two cycles whenever you can. In my example (very academic, to explain the process) I have considered three: a business cycle, a product cycle and an experiments cycle.

Please check the references (at the end of the first article) to learn more about the improvement kata. I will make some remarks about the above example for those of you less familiar with continuous improvement methodologies:

  • The overall idea is to synchronize the three cycles.
  • Business cycles are usually a year long. I would structure the business iterations in quarters.
  • Define the business goals for the product and relate them, both qualitatively and quantitatively, with the metrics / scenarios.
  • Define the current scenario and the target scenario for the current quarter. Be concise.
  • Define the most suitable cycles for your product. Bear in mind that some effort in analysis, coordination and communication with the workforce will be required on each iteration, so cycles that are too short might bring unnecessary overhead.
  • If the development teams work in weekly sprints, I would start with monthly iterations at the product level. If engineering teams work with two-week sprints, then consider iterations of six weeks at the product level.
  • As described in the previous article for the simpler model, I will use the boards to explain how the iterations are defined.

The first board corresponds to the business cycle, the second one to the product cycle and the third one to the technical experiments to be performed by the development teams. Click the picture to enlarge it and read the advice.

Summary

In this second article, we have enriched our model by introducing more complexity (four stages). We saw how the mathematical construct should be extended to characterize each stage, which allows us to get finer-grained information, increasing the accuracy of our analysis. We have seen that Throughput and Stability remain useful as key metrics, as expected.

We have also discovered that, as happened with our quantitative analysis, the qualitative one is essentially an extension of the one introduced in our previous article for the simpler model. We have modified our approach to continuous improvement, adapting the data-driven improvement kata to a more complex environment. The proposed example covers, in my view, large organizations. The data-driven improvement kata has been summarized in three boards, one per level: business, product and technical.

Conclusion

These two articles represent a modest attempt to summarize and structure, in five basic steps, the process that an organization should go through to improve its delivery process performance at scale, using Throughput and Stability as guiding lights.

Obviously, writing about this topic is easier than implementing it, so I am looking forward to reading about your own experience and how it contradicts, modifies, complements or supports the described methodology and advice.

Embrace Open Source culture: the 5 common transformations.

Article originally published at Linkedin on May 15th 2016.

This is a story of what I have lived or witnessed a few times so far. A story of an organization that consumed, developed and shipped proprietary software for many years. At some point in time, management took the decision to use Open Source. Like in most cases, the decision was forced by its customers, providers, competitors… and by the numbers.

A painful but unavoidable transformation was required.

1.- Open Source consumer

Engineers had to learn a new system and adapt or re-write the features that used to make the organization unique, together with many other painful actions. It was expensive at the beginning but, due to the cost reduction in licenses and the change that Linux represented in the relationship with providers, in a few years it was clearly worth it. And if it wasn’t, it didn’t matter, since it was what the market demanded. There was no way back.

Every software organization has gone through their unique journey, but the final sentence of the story has been the same for all of them: they became Open Source consumers.

2.- Open Source producer

This organization gained control over its production and, by consuming Open Source, it could focus many resources on differentiation, without changing its structure, development and delivery processes. At some point, it was shipping products that involved a significant percentage of generic software taken “from the internet”.

It became an Open Source producer.

You can recognize such organizations because they frequently create a specific group, usually linked to R&D, in charge of bringing all the innovation that is happening “in the Open Source community” into the organization.

Little by little, this organization realized that giving fast and satisfactory answers to customer demands became more and more expensive. It got stuck on what rapidly became an old kernel or toolchain version… Bringing innovation from “the community” required back-porting, solving complex integration issues, and dealing with incompatibilities between what your provider brings and what your customer wants.

So they have to upgrade.

This organization will now be able to take advantage of all the common features and compatibility that the new kernel, the new toolchain… brings. But, guess what, forward-porting all the differentiation features this organization has developed, and all the bug fixes, is so much work and so complicated that the challenge puts the organization at risk.

3.- Open Source contributors

The organization now feels the downsides of having become a blind Open Source consumer and producer. Execs feel as if a bubble has exploded and they are inside it. They have less control than they thought, which turns out to be expensive and, what is worse, they lack the expertise within the organization to regain it…

But, after struggling for some time, this organization survived, which means that it has learned some lessons:

  • Upstream those features that are not differentiation factors any more.
  • Increase the investment in that reduced group of rock stars who are up to date on what’s going on “in the different communities”.
  • Invest in those Open Source projects that develop the key software you consume.
  • Even better, start your own Open Source project to promote your technologies and be perceived as a leader…
  • Reduce the upgrade cycle, so the “upgrade pain” is lower. As a side effect, the organization has the opportunity to increase its cash flow by doing two smaller upgrades instead of a big one. At the end of the day, the real profit comes with the first update, not with the new version, right?

This organization ended up upstreaming features when it could, normally very late, because “they do not have time”, frequently assigning that task to young, inexperienced developers or, even “better”, subcontracting it, which is “cheaper”.

You can recognize that an organization has gone through the described process when attending talks given by any of its executives. They cannot stop talking about how much they contribute to this and that community, about how awesome the community is, how important it is to be open and share… while referring all the time to communities/upstream as “us” and “them”. They think they got it, they really do.

Most of them believe they are on the crest of the wave after going through this third transformation. They are innovative, they have been able to reduce their time to market, they are gaining reputation within a variety of communities… They are not just Open Source consumers and producers any more. They are also contributors. Some of them are even heavy and “successful” contributors.

But if you look closer, they have not adopted “the Open Source way”.

This organization keeps its traditional processes intact. It is managed in the same way. Decisions are taken as they were when it was a proprietary software company, it has not improved transparency significantly, it does not share code and practices among departments… There is a totally different reality in front of and behind its firewall, between production and R&D, between management, engineering, customer support, etc.

This organization faces friction because of this reality. It still cannot move fast enough. Upgrading is still too expensive, since now it has to be done more often than years ago, and upstreaming goes very slowly, when it happens at all. They cannot control the communities they are investing in…

So going through a fourth transformation becomes unavoidable. Some refer to this transformation as “upstream first”.

4.- Upstream first or becoming a good Open Source citizen

This fourth transformation basically means that upstreaming is part of your development process, not a side task. It also means that communities are part of your delivery strategy, not an afterthought, and that R&D is a two-way road where you do not just consume innovation created by others but share yours, not just “promote it”. You really need to get involved.

This organization will learn that by becoming more open, its engineers learn more and faster, and so does the organization itself. It is at this stage that the organization really understands where the real value lies in the software it produces, compared to what is commodity… or that is what its executives and managers most likely believe, once again. 🙂

But open source (no capital letters any more) is not about being open, but about being transparent, which means that it is not just about seeing what is behind the glass, but also understanding it.

I believe the fictional organization I am talking about will have to take one more step, the fifth one. It will be about “becoming upstream”.

5.- Becoming upstream or being an open source organization

This is about understanding that, if you consume, produce and contribute to Open Source, the smart thing to do is to become an open source company. I think it is naive to expect to take full advantage of Open Source while keeping your traditional corporate culture, which collides with that of those who produce most of the software you consume and ship, who are your “upstream”. You are building your business on top of them. Since you cannot control them, become “them”.

The smart thing to do is to surf the wave, not fight against it, generating friction. Any manager knows that friction is expensive, reduces focus and drives away talent.  It is bad for the business.

The culture change required to succeed in this fifth transformation involves thinking less about us (company and customers) and them (community), and more about us (ecosystem). It can no longer be about upstream and downstream, but about technology and service. It has to be less about upgrading and more about updating, less about “managing” and more about “leading” at every level of the organization, not just among execs and managers.

It is a transformation in which engineers are empowered, where management is more focused on collecting information for execs than on producing it and, after decisions are taken, its key focus is alignment. A transformation in which execs get closer to where the real value is, to people, because they are the “masters”. A transformation in which engineers do not just follow: they get exposed, they take responsibility and assume the consequences… and get paid for it.

An environment in which access to key information does not depend on your position in the organization chart, which means that power does not depend so much on what others do not know, and decisions are taken based on shared knowledge. A culture in which transparency is the norm, not the exception.

In summary, a transformation that leads to a stage in which the organization steers its ecosystem instead of driving it, and so leads it in a sustainable way.

A chimera?

I understand it might sound like a chimera, but:
  1. No more so than any of the stories that so many CEOs or Open Source Program managers from leading corporations tell nowadays in popular FOSS events about “their transformation” would have sounded 15 years ago.
  2. I do not think the debate is whether this fifth transformation will be needed, but when and how to go through it.
  3. My 10+ years of experience in Open Source and 17+ as a manager tell me that waiting to face any of the first four transformations until you have no choice is an unnecessary risk. I suspect the same will apply to the fifth one.
So my message is,
  1. Consume, produce and contribute to open source, being a good citizen.
  2. Embrace Open Source culture… better sooner than later.


Business around AGPL software? Oh yeah…..

I’ve heard many times to executives say that AGPL is a license incompatible with business. This is a well spread idea specially in the web world.

A few years ago a friend of mine, a visionary, Gonzalo Aller, from Fotón SI, the oldest Spanish Free Software company, told me that the AGPL is the GPL of the web world. He tried to convince me to put that license on a web project I was involved with. I thought he was crazy, like many times before (and later). I was one of those who, out of ignorance, used to see the AGPL as business unfriendly.

Like many times before, little by little, I changed my mind and became convinced, once again, that licenses define conditions but never determine business. Business is more about having good ideas, taking smart decisions, executing them correctly and putting your heart into it, and not so much about the license.

This statement is backed up by many examples. The following projects:

  • CiviCRM
  • Zarafa
  • Gitorious
  • Launchpad
  • ProcessMaker

and many more are AGPL-based Libre Software projects run by companies that successfully build services around the product.

We have a new example, announced just today. OwnCloud Inc is born to support OwnCloud, a KDE project based on AGPL. Good luck and happy business!