If you want to go far, together is faster (II).

This is the second of a series of two articles describing the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. The first article is called If you want to go far, together is faster (I). Please read it before this post if you haven’t already. You can also watch the talk I gave at InnerSource Commons Fall 2020 that summarizes this series.

In the previous post I provided some background and described my perception of what causes managers to resist involving their development teams in Open Source projects and Inner Source programs. I enumerated five not-so-simple steps to reduce such resistance. This article explains those steps in some detail.

Let me start by enumerating the proposed steps again:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics in Open Source projects and Inner Source programs.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between those two groups of metrics.
4.- Create a reporting strategy for developers and managers based on such positive correlation.
5.- Use such strategy to turn around your story: it is about creating positive business impact at scale through open collaboration.

The solution explained

1.- Collaboration and community health metrics as well as product delivery process performance metrics.

Most Open Source projects and Inner Source programs focus their initial metrics efforts on measuring collaboration and community health. There is an Open Source project hosted by the Linux Foundation, called CHAOSS, focused on the definition of many types of metrics; collaboration and community health metrics are among the more mature ones. You can find plenty of examples of these metrics applied in a variety of Open Source projects there too.

Inner Source programs are taking the experience developed by Open Source projects in this field and applying it internally, so many of them use such metrics (collaboration and community health) as the basis to evaluate how successful they are. In our attempt to expand our study of these collaboration environments to areas directly related to productivity, efficiency, etc., additional metrics should be considered.

Before getting into the core ones, I have to say that many projects pay attention to code review related metrics as well as defect management to evaluate productivity or performance. These metrics go in the right direction, but they are only partial and, in many cases, they do not work very well to demonstrate a clear relation between collaboration and productivity or performance, for instance. Let me give a few examples of why.

Code review is a standard practice among Open Source projects, but at scale it is perceived by many as an inefficient activity compared to others when knowledge transfer and mentorship are not a core goal. Pair or mob programming, as well as code review restricted to team scale, are practices perceived by many execution managers as more efficient in corporate environments.

When it comes to defect management, companies have been tracking these variables for a long time, and it will be very hard for Open Source and Inner Source evangelists to convince execution managers that what you are doing in the open or in the Inner Source program is so much better, and especially cheaper, that it is worth participating. For many of these managers, cost control goes first and code sustainability comes later, not the other way around.

Unsurprisingly, I recommend focusing on the software product delivery process as a first step towards reducing the resistance from execution managers to embrace collaboration at scale. I pick the delivery process because it is deterministic, so it is simpler to apply process engineering (and therefore metrics) to it than to any other stage of the product life cycle that involves development. Of all the potential metrics, throughput and stability are the essential ones.

Throughput and Stability

It is not the point of this article to go deep into these metrics. I suggest you refer to delivery or product managers at your organization who embrace Continuous Delivery principles and practices to get information about these core metrics. You can also read Steve Smith’s book Measuring Continuous Delivery, which defines the metrics in detail, characterizes them and provides guidance on how to implement and use them. You can find more details about this and other interesting books in the Reads section of this site, by the way.

There are several reasons for me to recommend these two metrics. Some of them are:

  • Both metrics characterize the performance of a system that processes a flow of elements. Software product delivery can be conceived as such a system, where the information flows in the form of code commits, packages, images…
  • Both metrics (sometimes in different forms/expressions) are widely used in other knowledge areas, in some cases for quite some time now, like networking, lean manufacturing, fluid dynamics… There is little magic behind them.
  • To me the most relevant characteristic is that, once your delivery system is modeled, both metrics can be applied at system level (globally) and at a specific stage (locally); see the sketch after this list. This is extremely powerful when trying to improve the overall performance of the delivery process through local actions at specific points. You can track the effect of local improvements on the entire process.
  • Both metrics have simple units and are simple to measure. The complexity is operational when different tools are used across the delivery process. The usage of these metrics reduces the complexity to a technical problem.
  • Throughput and Stability are positively correlated when applying Continuous Delivery principles and practices. In addition, they can be used to track how well you are doing when moving from a discontinuous to a continuous delivery system. Several of the practices promoted by Continuous Delivery are already very popular among Open Source projects. In some cases, some would claim that they were invented there, way before Continuous Delivery was a thing in corporate environments. I love the chicken-egg debates… but not now.
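
To make these metrics concrete, here is a minimal sketch, in Python, of how throughput and stability could be computed both globally and per stage from a log of pipeline runs. The event structure and field names are illustrative assumptions, not a reference to any specific tool; map them to whatever your pipelines actually emit.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical pipeline events: each record is one item (commit, package, image...)
# passing through one stage of the delivery process. Field names are illustrative.
events = [
    {"stage": "commit", "finished": "2020-11-02T10:00", "success": True},
    {"stage": "commit", "finished": "2020-11-02T11:30", "success": True},
    {"stage": "build",  "finished": "2020-11-02T12:00", "success": False},
    {"stage": "build",  "finished": "2020-11-03T09:15", "success": True},
    {"stage": "deploy", "finished": "2020-11-04T16:40", "success": True},
]

def metrics(records):
    """Throughput: successful items per active day. Stability: share of successful runs."""
    ok = [r for r in records if r["success"]]
    days = {datetime.fromisoformat(r["finished"]).date() for r in records}
    throughput = len(ok) / max(len(days), 1)  # items per day
    stability = len(ok) / len(records)        # 0..1, i.e. 1 - failure rate
    return round(throughput, 2), round(stability, 2)

# System-level (global) view of the whole delivery process...
print("system:", metrics(events))

# ...and the same metrics applied locally, stage by stage.
per_stage = defaultdict(list)
for event in events:
    per_stage[event["stage"]].append(event)
for stage, records in per_stage.items():
    print(stage, metrics(records))
```

The same two small functions, applied to the full event log and to each stage’s subset, give you the global and local views mentioned in the list above.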

Let’s assume from now on that I have convinced you that Throughput and Stability are the two metrics to focus on, in addition to the collaboration and community health metrics your Open Source or Inner Source project is already using.

If you are not convinced, by the way, even after reading S. Smith’s book, you might want to check the most common references on Continuous Delivery. Dave Farley, one of the fathers of the Continuous Delivery movement, has a new series of videos you should watch. One of them deals with these two metrics.

2.- Correlate both groups of metrics

Let’s assume for a moment that you have implemented such delivery process metrics in several of the projects in your Inner Source initiative or across the delivery pipelines of your Open Source project. The next step is to introduce an Improvement Kata process to define and evaluate the outcome of specific actions against pre-established high-level SMART goals. Such goals should aim for a correlation between both types of metrics (community health / collaboration and delivery process ones).

Let me give an example. It is widely understood in Open Source projects that being welcoming is a sign of good health. It is common to measure how many newcomers the project attracts over time and their initial journey within the community, looking for their consolidation as contributors. Similar thinking is followed in Inner Source projects.

The truth is that more capacity does not always translate into higher throughput or an increase in process stability; on the contrary, it is widely accepted among execution managers that the opposite is more likely in some cases. Unless the work structure, and so the teams and the tooling, are oriented to embrace flexible capacity, high rates of capacity variability lead to inefficiencies. This is an example of an expected negative correlation.

In this particular case, then, the goal is to extend the actions related to increasing our number of new contributors to our delivery process, ensuring that our system can absorb an increase in capacity at the expected rate and that we can track it accordingly.

What do we have to do to mitigate the risk of an increasing integration failure rate caused by higher throughput at the commit stage? Can we increase our build capacity accordingly? Can our testing infrastructure digest the increase in builds derived from increasing our development capacity, assuming we keep the number of commits per triggered build?

In summary, work on the correlation of both groups of metrics: link actions that affect both community health and collaboration metrics and delivery metrics.
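
As a sketch of what correlating both groups of metrics could look like in practice, the snippet below computes a Pearson correlation between a community metric (new contributors consolidated per week) and a delivery metric (changes delivered per week). The weekly numbers are made up for illustration; the point is the mechanics, not the data.

```python
from statistics import mean, stdev

# Made-up weekly series: one community metric and one delivery metric.
new_contributors = [2, 3, 5, 4, 6, 8, 7, 9]          # newcomers consolidated per week
throughput       = [14, 15, 18, 17, 20, 19, 22, 24]  # changes delivered per week

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(new_contributors, throughput)
print(f"correlation(new contributors, throughput) = {r:.2f}")
# A value close to +1 supports the "more capacity leads to higher throughput" hypothesis;
# a value near zero or negative signals the kind of inefficiency described above.
```

The same calculation can be repeated for stability or for any other pair of metrics you decide to track in your Improvement Kata experiments.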

3.- Focus on decisions and actions that create a positive correlation between both groups of metrics.

Some executed actions designed to increase our number of contributors might lead to a reduction in throughput or stability, others might have a positive effect on one of them but not the other (spoiler alert: at some point both will decrease) and some others will increase both of them (positive correlation).

If you work in an environment where Continuous Delivery is the norm, those behind the execution will understand which actions have a positive correlation between throughput and stability. Your job will simply be to link those actions with the ones you are familiar with in the community health and collaboration space. If not, your work will be harder, but still worth it.

For our particular case, you might find, for instance, that a simple measure to digest the increasing number of commits (bug fixes) can be to scale up the build capacity if you have remaining budget. You might find, though, that you have problems doing so when reviewing acceptance criteria because you lack automation, or that your current testing-on-hardware capacity is almost fixed due to limitations in the system that manages your test benches, and additional effort to improve the situation is required.

Establishing experiments that consider not just the collaboration side but also the software delivery one, and moving into production those experiments that demonstrate a positive correlation between the target metrics (increasing all of them), might bring you surprising results, sometimes far from the common knowledge of those focused on collaboration aspects only, but closer to that of those focused on execution.

4.- Create a reporting strategy for developers and managers based on such positive correlation.

A board member of an organization I was managing once told me something I have followed ever since. It was something like…

Managers talk to managers through reports. Speak up clearly through them.

As a manager I used to put a lot of thought into the reporting strategy. I have written some blog posts about this point. Besides things like the language used or the OKRs and KPIs you base your reporting on, understanding the motivations and background of the target audience of those reports is just as important.

I suggest you pay attention to how those you want to convince about participating in Open Source or Inner Source projects report to their managers, as well as how others report to them. Are those reports time based or KPI based? Are they presented and discussed in 1:1s or in a team meeting? Usually every senior manager dealing with execution has a consolidated way of reporting and being reported to. Adapt to it instead of keeping the format we are more used to in open environments. I love reporting through a team or department blog, but it might not be the best format for this case.

After creating and evaluating many reports about community health and collaboration activities, I suggest changing how they are conceived. Instead of focusing on collaboration growth and community health first and then on the consequences of such improvements for the organization (benefits), focus first on how product or project performance has improved while collaboration and community health have improved. In other words, change how cause and effect are presented.

The idea is to convince execution managers that by participating in Open Source projects or Inner Source programs, their teams can learn how to be more efficient and productive in short cycles while achieving long term goals they can present to executives. Help those managers also to present both types of achievements to their executives using your own reports.

For engineers, move the spotlight away from the growth of interactions among developers and put it on the increase in stability derived from making those interactions meaningful, for instance. Or try to correlate diversity metrics with defect management results, or with reductions in change failure rates or detected security vulnerabilities, etc. Partially move your reporting focus away from team satisfaction (a common strategy within Open Source projects) and put it on team performance and productivity. They are obviously intimately related, but tech leads and other key roles within your company might be more sensitive to the latter.

In summary, you achieve the proposed goal if execution managers can take the reports you present to them and insert them into theirs without re-interpreting the language, the figures, the datasets, the conclusions…

5.- Turn your story around.

If you manage to find positive correlations between the proposed metrics and report on those correlations in a way that resonates with execution managers, you will have established a very powerful platform to create an unbeatable story around your Inner Source program or your participation in Open Source projects. Investment growth will receive less resistance and it will be easier to infect execution units with practices and tools promoted through the collaboration program.

Advocates and evangelists will feel more supported in their viral infection, and those responsible for these programs will gain an invaluable ally in their battle against legal, procurement, IP or risk departments, among others. Collaboration will not just be clearly good for the developers or the company, but also for the product portfolio or the services. And not just in the long run but also in the shorter term. That is a significant difference.

Your story will be about increasing business impact through collaboration instead of about collaborating to achieve bigger business impact. Open collaboration environments increase productivity and have a tangible positive impact on the organization’s products and services, so they have a clear positive business impact.

Conclusion

In order to attract execution managers to promote the participation of their departments and teams in Open Source projects and Inner Source programs, I recommend defining a different communication strategy, one that relies on reports based on results from actions that show a positive correlation between community health and collaboration metrics and delivery process performance metrics, especially throughput and stability. This idea can be summarized in the following steps, explained in these two articles:

  • Collaboration within a commercial organization matters more to management if it has a measurable positive business impact.
  • To make decisions and evaluate their impact within your Inner Source program or the FLOSS community, combine collaboration and community health metrics with delivery metrics, fundamentally throughput and stability.
  • Prioritize those decisions/actions that produce a tangible positive correlation between these two groups of metrics.
  • Report, especially to managers, based on such positive correlation.
  • Adapt your Inner Source or Open Source story: increase business impact through collaboration.

In a nutshell, it all comes down to proving that, at scale…

if you want to go far, together is faster.

Check the first article of this series if you haven’t. You can also watch the recording of the talk I gave at ISC Fall 2020, where I summarized what is explained in these two articles.

I would like to thank the ISC Fall 2020 content committee and organizers for giving me the opportunity to participate in such an interesting and well organized event.

If you want to go far, together is faster (I).

This is the first of a series of two articles describing the reasoning and the steps behind the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. What you will read in these articles is an extension of a talk I gave at the event Inner Source Commons Fall 2020.

Background

There is a very popular African proverb within the Free Software movement that says…

If you want to go fast, go alone. If you want to go far, go together.

Many of us have used it for years to promote collaboration among commercial organizations over developing internally in a fast way, at the risk of reinventing the wheel, not following standards, reducing quality, etc.

The proverb describes an implicit OR relation between the traditional Open Source mindset, focused on longer term results obtained through extensive collaboration, and the traditional corporate mindset, where time-to-market is almost an obsession.

Early in my career I got exposed as a manager (I do not code) to agile and, a little later, to Continuous Delivery. This second set of principles and practices had a big impact on me because of the tight and positive correlation it proposes between speed and quality. Until then, I had assumed that such correlation was negative: when one increases, the other decreases, or vice versa.

For a long time, I also unconsciously assumed as true the negative correlation between collaboration and speed. It was not until I started working on projects at scale that I became aware of that unconscious assumption and started questioning it first and challenging it later.

In my early years in Open Source I found myself many times discussing with executives and managers the benefits of this “collaboration framework” and why they should adopt it. Like probably many of you, I found myself being more successful among executives and politicians than among middle-layer managers.

– “No wonder they are executives” I thought more than once back then.

But time proved me wrong once again.

Problem statement: can we go faster by going together?

It was not until later in my career that I could relate to good Open Source evangelists, but especially to good sales professionals. I learned a bit about how different groups within the same organization are incentivized differently, and that you need to understand those incentives to tune your message in a way that they can relate to it.

Most of my arguments and those of my colleagues back then were focused on cost reductions and collaboration, on preventing silos, on shortening innovation cycles, on sustainability, on preventing vendor lock-in, etc. Those arguments resonated very well with those responsible for strategic decisions or with managers directly related to innovation. But they did not work well with execution managers, especially senior ones.

When I have been a manager myself in the software industry, my incentives frequently had little to do with those arguments. In some cases, neither did my manager’s incentives, despite it being an Open Organization. Open Source was part of the company culture, but management objectives had little to do with collaboration. Variables like productivity, efficiency, time-to-market, customer satisfaction, defect management, release cycles, yearly costs, etc., were the core incentives that drove my actions and those of the people around me.

If that was the case for the organizations I was involved in back then, imagine traditional corporations. Later on I got engaged with such companies, which confirmed this intuition.

I found myself more than once arguing with my managers about priorities and incentives, goals and KPIs because, as an Open Source guy, I was for some time unable to clearly articulate the positive correlation between collaboration and efficiency, productivity, cost reduction, etc. In some cases, this inability was a factor in generating a collision that ended up with my bones out of the organization.

That positive correlation between collaboration and productivity was counter-intuitive for many middle managers I knew ten years ago. It still is for some, even in the Open Source space. Haven’t you heard managers say that if you want to meet deadlines you should not work upstream because you will move slower? I heard it so many times that, as mentioned before, for years I believed it was true. It might be at small scale, but at large scale it is not necessarily true.

It was not until two or three years ago that I started paying attention to Inner Source. I realized that many have inherited this belief. And since they live in corporate environments, the challenge of convincing execution-related managers is bigger than in Open Source.

Inner Source programs are usually supported by executives and R&D departments but receive resistance from middle management, especially those closer to execution units. Collaborating with other departments might be good in the long term but it is perceived as less productive than developing in isolation. Somehow, in order to participate in Inner Source programs, they see themselves choosing between shorter-term and longer-term goals, between their incentives and those of the executives. It has little to do with their ability to “get it“.

So either their incentives are changed and they demonstrate that the organization can still be profitable, or you need to adapt to those incentives. What I believe is that adapting to those incentives means, in a nutshell, providing a solid answer to the question: can we go faster by going together?

The proposed solution: if you want to go far, together is faster.

If we could find a positive correlation between efficiency/productivity and collaboration, we could change the proverb above to something like…

If you want to go far, together is faster.

And hey, technically speaking, it would still be an African proverb, since I am from the Canary Islands, right?

The idea behind the above sentence is to establish an AND relation between speed and collaboration, meeting both traditional corporate and Open Source (and Inner Source) goals.

Proving such a positive correlation could help reduce the resistance offered by middle management to practicing collaboration at scale, either within Inner Source programs or Open Source projects. They would perceive such participation as a path to meet those longer term goals without contradicting many of the incentives they work with and promote among the people they manage.

So the next question is: how can we do that? How can we provide evidence of such a positive correlation in a language that is familiar to those managers?

The solution summarized: ISC Fall 2020

During ISC Fall 2020 I tried to briefly explain to people running Inner Source programs a potential path to establish such a relation in five not-so-simple steps. The core slide of my presentation enumerated them as:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between them.
4.- Create a reporting strategy for developers and managers based on such positive correlation.
5.- Use such strategy to turn around your Inner Source/Open Source story: it is about creating positive business impact at scale through open collaboration.

A detailed explanation of these five points can be found in the second article of this series:

If you want to go far, together is faster (II).

You can also watch the recording of my talk at ISC Fall 2020.

Software Product Inventory: what is it and how to implement it.

The concept of inventory applied to software, sometimes called catalogue, is not new. In IT/help-desk it usually refers to the software deployed in your organization. Throughout history, there have been many IT Software Inventory Management tools. I first started to think about it beyond that meaning when working on deployments of Linux based desktops at scale.

The popularity of Open Source and Continuous Delivery is giving this traditionally static concept a wider scope as well as more relevance. It is still immature though, so read the article with that in mind.

1.- What is Inventory in software product development?

I like to think about the software inventory as the single source of truth of your software product, and so the main element for product development and delivery auditing purposes.

Isn’t that the source code?

Yes, but not only. The source code corresponding to the product that you ship (distribute) is a big part of it, but there are other important elements that should be considered part of the inventory like:

  • Requirements and/or tests, logs and results.
  • Technical documentation.
  • Tools and pipelines configuration files.
  • Packages, definitions or recipes…
  • Hashes, signatures, crypto libraries.
  • License metadata, manifests, etc.
  • Metadata associated with security checks, permission descriptions…
  • Data associated with process performance metrics and monitoring/telemetry.
  • Many more…

When defined that way, the Software Inventory is a concept relevant in every stage of the software product life cycle. When you introduce, change, produce, publish, deploy or distribute any element of your product portfolio, your software inventory should change too.

There are two interesting considerations to add.

1.- If your product is part of a supply chain, like in any Open Source upstream/downstream environment, then the software inventory concept expands and becomes even more relevant, since it can become an essential inbound-outbound control mechanism, even at acquisition time.

2.- In critical environments, especially safety-critical ones, keeping such a single source of truth goes beyond a “good practice”. Integrity, traceability and reproducibility, for example, can be simpler to manage with a Product Software Inventory.

When you think about this particular case, it becomes clear to me that the elements that belong to the inventory go beyond the actual deliverables or “product sources”. It should also include those elements and tools necessary to generate, transform, evaluate and deploy/ship them, and to evaluate their purpose.

2.- Static vs dynamic concept

Considering the above, the Software Product Inventory is a living construction, so dynamic, with the capacity to be frozen at any point in time (snapshot). This might seem obvious but it implies a different approach than supply and release management has traditionally considered (deliverables).

If evaluating, adding, modifying or managing elements of the inventory requires any action that significantly increases the cycle time of any specific stage, decompose those actions, parallelize them when possible and, when there is no choice, push them right in the pipelines. Ideally, no Software Product Inventory related activity should produce any friction in the code flow.

In a Continuous Delivery environment, implementing the inventory requires taking actions across the entire development and delivery processes. Here are some points to consider at key stages:

2.1.- Inbound process: stage 0 of the development process

No software or any other element can become part of the product portfolio if it is not present in the Software Inventory. It makes sense to implement the Product Inventory concept as part of the inbound stage (stage 0). Following Continuous Delivery principles and practices, here are some things to avoid vs promote:

  • Handovers or manual/committee-based vs code-review-like approval processes (pull vs push).
  • Completeness vs walking skeleton approach.
  • Management oriented (document based) vs engineering oriented (git based) tooling whenever possible.
  • Reports (manual) vs evidences (automated and reproducible) as output.
  • Access control vs signing and encrypting (if needed).

Avoid gate keeping activities. It is better to promote high throughput and short feedback loops than “quality gates” to improve product quality. If an evaluation is not completed, it is better to tag that piece of software as pending a decision and let the code flow than to have the engineers waiting for a decision from third parties.

I recognize that the concept might be too abstract to be easy to buy into at first beyond the inbound and outbound (release/deploy) stages. Sadly, there is a strong tendency to pick up the concept at the inbound stage to establish, early on, a gate-keeping, committee-based process to control the software that the developers use in the project, frequently compromising the code flow at a very early stage.

I prefer to focus first on the procurement stage in the case of suppliers, or on how the relation with partners is established. These are hand-over processes that benefit heavily from being restructured, reducing acquisition and on-boarding times and simplifying their conditions.

More frequently than I would like to admit, Open Source is becoming a driver in this wrong direction, in many cases due to the proliferation of Open Source Offices in corporations that prefer to focus their initial attention on establishing specific policies for their own developers rather than on changing their relations with partners and suppliers.

This is frequently due to a lack of understanding of software product development at scale and of what Continuous Delivery is about. In a nutshell, having their own engineers selecting the right Open Source software is prioritized over changing the relation with their existing commercial ecosystem, a more difficult but higher-impact activity in many cases, according to my experience.

2.2.- Outbound process (deployment or release): the last stage of the delivery process.

The inventory accumulates all the elements required to ship/deploy the product, plus all the elements required to recreate and evaluate the development and delivery process as well as the product itself, no matter whether they are released or deployed. Ideally, these elements are evidence-based instead of report-based.

As in the inbound case, each element of the Inventory should be signed/encrypted, as well as the overall snapshot, associated with the deployed/released product version. In case you are consuming or producing Free Software, please see the OpenChain Project specification for more information about some good practices.
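
As an illustration of what signing each element and the overall snapshot might look like in practice, here is a minimal sketch that hashes every file referenced by the inventory for a given release and produces a detached GPG signature of the resulting snapshot. The file paths, the snapshot format and the key handling are assumptions; adapt them to your own tooling and policies.

```python
import hashlib
import json
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical list of inventory elements shipped with this release.
elements = [Path("dist/product-1.2.0.tar.gz"), Path("docs/manual.pdf")]

snapshot = {
    "version": "1.2.0",
    "elements": [{"path": str(p), "sha256": sha256(p)} for p in elements],
}

snapshot_file = Path("inventory-snapshot-1.2.0.json")
snapshot_file.write_text(json.dumps(snapshot, indent=2))

# Detached, ASCII-armored signature of the snapshot (requires a configured GPG key).
subprocess.run(["gpg", "--detach-sign", "--armor", str(snapshot_file)], check=True)
```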

2.3.- Intermediate stages

As previously mentioned, the concept of Inventory is relevant at every stage of the development/delivery process. In general, it is all about generating additional/parallel outputs within the pipelines, signing and storing them in association with the related source code and binaries, in a way that those evidences become “part of the product”. Using proprietary tools might break the trust chain in your process. This is something to consider carefully in safety-critical environments. You will also need to consider the hardware, including dev. versions and prototypes.

A very interesting and open field is the Inventory concept in the context of safety-critical certifications, which traditionally have been very report-heavy. In this regard, I find the usage of systems thinking very promising. Check Trustable Software, for instance.

3.- Some practical advice

3.1.- Walking skeleton vs completeness

I love the walking skeleton concept for designing and implementing processes in product development. It is significantly better to establish an end-to-end approach to the Inventory, where it has a light/soft/incomplete presence along the entire development/delivery cycle, than to try to implement it stage by stage aiming for completeness, which prevents you from having process-wide feedback loops.

It is not so much about doing it right as it is about moving fast in the right direction.

For instance, a frequent mistake is to concentrate most of the activities related to software license compliance on the inbound and outbound stages. Software license conformance and clearance have traditionally been perceived in many industries as a validation process performed by specialists, just like testing was not so long ago, or as a procurement action (acquisition).

Although lately more and more corporations are promoting the execution of license compliance activities at both stages (inbound and outbound), since they consume and ship more and more FOSS, these are still very report-based, specialist-driven and management-controlled activities.

I have witnessed enough dramatic situations to understand and promote that software license compliance is everybody’s job, just like tests or technical documentation (everything as code approach). Software license conformance and clearance, together with security, testing and technical documentation, can become the key drivers of the implementation of the Product Inventory concept. They share the same principles in this regard. The history of testing is the mirror to look at.

Decompose the software license compliance activities (conformance and clearance) and perform them across your pipelines. Start by executing simple conformance checks (REUSE) early on (in the inbound process, for instance). Coordinate such activities with the security team to also perform simple static code analysis. Agree with the architects or tech leads on checking coding guidelines or other elements that can have a future impact on quality, taking advantage of the Inventory concept. Add not just the software and the checks to the Inventory, but also the results, logs and the simple tools/scripts used.
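
As an example of such a simple conformance check decomposed into the pipeline, the sketch below runs the REUSE lint tool over an inbound component and stores the result as a small evidence record next to the inventory. It assumes the reuse CLI is installed; the directory layout and file names are illustrative.

```python
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def reuse_lint(repo: Path) -> dict:
    """Run 'reuse lint' on a repository and return a small evidence record."""
    result = subprocess.run(
        ["reuse", "lint"], cwd=repo, capture_output=True, text=True
    )
    return {
        "repo": str(repo),
        "check": "reuse-lint",
        "compliant": result.returncode == 0,
        "log": result.stdout,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Store the record next to the inventory so it becomes part of the evidence trail.
evidence_dir = Path("inventory/evidence")
evidence_dir.mkdir(parents=True, exist_ok=True)
record = reuse_lint(Path("incoming/some-component"))
(evidence_dir / "some-component-reuse.json").write_text(json.dumps(record, indent=2))
```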

More time-consuming and intensive activities, using more complex static code analysis or license scanning tools, can use the inventory as their source (pull approach), instead of requesting the teams to perform such activities on their own (outside the pipelines) or establishing hand-over processes with specialists.

Be careful about how and when you include such activities in the pipelines though. Again, decompose and parallelize. And only when there is no choice, push these activities right in the delivery process. But do not break the code flow.

3.2.- Keep It Simple Stupid until it is not stupid anymore.

Here are some simple actions to start with…

At the beginning at least, use the same tools for the Inventory that you are already using for the development of the product. Initially, your inventory can be nothing more than a file with a list of repos, hashes and links pointing at the location of product elements. This already has value for security and software license compliance teams.

If you use a multi-repository approach, pay attention to where the build tool pulls the software from (definitions/recipes) to integrate the product. Make sure your initial inventory and the build tool are “in sync”. This will have a tremendous impact later on.
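
A minimal sketch of such a starting point could be a script that walks your checked-out repositories and writes a single file listing each repo, its origin URL and its current commit hash, which you can then compare against the build tool definitions to check they are “in sync”. The directory layout and file names are assumptions for illustration.

```python
import json
import subprocess
from pathlib import Path

def git(repo: Path, *args: str) -> str:
    """Run a git command inside a repository and return its trimmed output."""
    result = subprocess.run(
        ["git", "-C", str(repo), *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# Hypothetical layout: every product repository is checked out under ./repos.
entries = []
for repo in sorted(Path("repos").iterdir()):
    if not (repo / ".git").exists():
        continue
    entries.append({
        "name": repo.name,
        "url": git(repo, "remote", "get-url", "origin"),
        "commit": git(repo, "rev-parse", "HEAD"),
    })

# Initially, the whole inventory is nothing more than this one file.
Path("inventory.json").write_text(json.dumps(entries, indent=2))
print(f"{len(entries)} repositories recorded in inventory.json")
```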

Export and sign the technical documentation living in your repositories (Markdown, PlantUML, .svg, etc.) as documents if they need to be part of the product deliverables, so you can establish simple checks to confirm their presence, integrity, etc. These outputs should be part of the Inventory as well.

Many of you already perform these and many other activities as part of the development and delivery process. The question is what to do with the associated metadata, the tooling used to generate them, the intermediate states required to get the output, the executed scripts, how to relate them to the code and to other elements from previous stages of the process, how you guarantee their persistence and integrity, how you manage them at scale, how to store them, etc.

3.3.- Act as an auditor

I always ask myself the following question: if I replicate the complete product development system, providing all the product inputs for auditing purposes, how can I save the auditor time so she does not need to understand the system itself and perform all the actions again in order to fully trust the output and the system itself (what/how we did it, evidence based) rather than me or the workforce (who did it, report based)? Remember that you or your team will be such auditors in the future.

3.4.- Do not push the concept too early

If your starting point is very management (so reporting) driven, for instance an environment where requirements or license information is generated and kept in .docx documents, where the usage of binaries over source code is the norm, where proprietary tools are not questioned, or where trust in the actual processes and outputs is based on who signs the associated reports (Officers) instead of on evidence, save yourself headaches and do not try to push the Inventory concept beyond your area of direct influence. In my experience, it is not worth it. You have a different battle to fight, a different and deeper problem to solve in such a case.

In such cases, simply get your area of influence ready for the future crisis that will hit your product, so you get a chance to become part of the potential solution. Look for allies like the security, software license compliance and technical writing teams, for instance, to expand that ring of influence. Hopefully they will see the value of the inventory and how it can become a trigger to support modern and widely accepted practices within their domains across the organization.

4.- Summary

The Software Product Inventory is a high level (abstract) concept that will help you to move towards creating trustable software. Since it is part of the development process, following Continuous Delivery principles and practices to design and implement it is essential.

Some of the concepts behind this idea are probably already present in your organization, but frequently using a management-driven approach, associated with the release and/or inbound stages, implemented in ways that generate a negative impact (friction) on the product code flow, and so on throughput and stability.

System thinking and asking questions as if you were an auditor will help you to implement simple measures first, and habits later, that will raise the quality of the product over time, even if the Product Inventory concept does not fly in your team or organization. That approach has helped me, at least.

Agile On The Beach: my first time

During 2017 and 2018 I got several interesting references at events like Scale Summit and pipelineconf to an event in the south-west of England about lean, agile and continuous delivery that caught my attention. I decided that I could not miss it in 2019.

The 2019 edition of Agile On The Beach was the 9th one. The event has gained a good reputation among agilists for being a good mix of great content and a relaxed atmosphere in a beautiful environment. This is not surprising for somebody coming from the Open Source space, but it is not such a common combination within the lean/agile/CD world.

I bought the tickets and reserved the accommodation in time (months before the event). As you know, I started at MBition in June. My employer was kind enough to make it easy for me to attend. So on July 10th I headed to Falmouth, Cornwall, UK to participate in Agile On The Beach during the following two days, coming back to Málaga on Saturday July 13th.

I liked the event. I felt at home, as if I were at Akademy, for instance. There were some outstanding talks, a great atmosphere, talented and experienced attendees, well-known speakers, good organization and nice social activities. Yes, the trip to get there was long, but I had no delays in any of the several planes and trains I had to take (who said British trains are awful?), which allowed me to put in a few hours of work on each way.

The videos have not been published yet. I will add as comments those I recommend you check.

Update: please find the videos here.

Next year the event celebrates the 10th edition. They promised to do something special. If you are in the UK or are willing to invest several hours traveling to a good Lean/Agile/CD event, Agile On The Beach is a great one. But stay tuned, the number of tickets is limited and they are gone several months before the event starts.