If you want to go far, together is faster (II).

This is the second in a series of two articles describing the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. The first article is called If you want to go far, together is faster (I). Please read it before this post if you haven’t already. You can also watch the talk I gave at InnerSource Commons Fall 2020 that summarizes this series.

In the previous post I provided some background and described my perception of what causes managers to resist involving their development teams in Open Source projects and Inner Source programs. I enumerated five not-so-simple steps to reduce such resistance. This article explains those steps in some detail.

Let me start by enumerating the proposed steps again:

1.- In addition to collaboration and community health metrics, track product delivery process performance metrics in Open Source projects and Inner Source programs.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between those two groups of metrics.
4.- Create a reporting strategy for developers and managers based on that positive correlation.
5.- Use that strategy to turn your story around: it is about creating positive business impact at scale through open collaboration.

The solution explained

1.- Collaboration and community health metrics as well as product delivery process performance metrics.

Most Open Source projects and Inner Source programs focus their initial metrics efforts on measuring collaboration and community health. CHAOSS, an Open Source project hosted by the Linux Foundation, is dedicated to defining many types of metrics, and collaboration and community health metrics are among its most mature ones. You can also find there plenty of examples of these metrics applied in a variety of Open Source projects.

Inner Source programs are taking the experience developed by Open Source projects in this field and applying it internally, so many of them use such metrics (collaboration and community health) as the basis for evaluating how successful they are. To expand our study of these collaboration environments into areas directly related to productivity, efficiency, etc., additional metrics should be considered.

Before getting into the core ones, I should say that many projects pay attention to code review and defect management metrics to evaluate productivity or performance. These metrics go in the right direction, but they are only partial and, in many cases, do not work well to demonstrate a clear relation between collaboration and, for instance, productivity or performance. Let me give a few examples of why.

Code review is a standard practice among Open Source projects, but at scale many perceive it as inefficient compared to other activities when knowledge transfer and mentorship are not a core goal. Pair or mob programming, as well as code review restricted to the team, are practices that many execution managers perceive as more efficient in corporate environments.

When it comes to defect management, companies have been tracking these variables for a long time, and it will be very hard for Open Source and Inner Source evangelists to convince execution managers that what is done in the open or in the Inner Source program is so much better, and especially cheaper, that it is worth participating. For many of these managers, cost control comes first and code sustainability later, not the other way around.

Unsurprisingly, I recommend focusing on the software product delivery process as a first step towards reducing execution managers’ resistance to embracing collaboration at scale. I pick the delivery process because it is deterministic, so it is simpler to apply process engineering (and therefore metrics) to it than to any other stage of the product life cycle that involves development. Of all the potential metrics, throughput and stability are the essential ones.

Throughput and Stability

Going deep into these metrics is beyond the point of this article. I suggest you refer to delivery or product managers at your organization who embrace Continuous Delivery principles and practices to get information about these core metrics. You can also read Steve Smith’s book Measuring Continuous Delivery, which defines the metrics in detail, characterizes them and provides guidance on how to implement and use them. You can find more details about this and other interesting books in the Reads section of this site, by the way.

There are several reasons for me to recommend these two metrics. Some of them are:

  • Both metrics characterize the performance of a system that processes a flow of elements. Software product delivery can be conceived as such a system, where information flows in the form of code commits, packages, images…
  • Both metrics (sometimes in different forms or expressions) are widely used in other knowledge areas, in some cases for quite some time now: networking, lean manufacturing, fluid dynamics… There is little magic behind them.
  • To me the most relevant characteristic is that, once your delivery system is modeled, both metrics can be applied at the system level (globally) and at a specific stage (locally). This is extremely powerful when trying to improve the overall performance of the delivery process through local actions at specific points. You can track the effect of local improvements on the entire process.
  • Both metrics have simple units and are simple to measure. The complexity is operational when different tools are used across the delivery process. Using these metrics reduces that complexity to a technical problem.
  • Throughput and Stability are positively correlated when applying Continuous Delivery principles and practices. In addition, they can be used to track how well you are doing when moving from a discontinuous to a continuous delivery system. Several of the practices promoted by Continuous Delivery are already very popular among Open Source projects. In some cases, some would claim that they were invented there, way before Continuous Delivery was a thing in corporate environments. I love the chicken-and-egg debates… but not now.
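To make these two metrics concrete, here is a minimal sketch, in Python, of how they could be derived from a log of production deployments. The event structure and the sample data are invented for illustration; Smith’s book defines the metrics rigorously and covers how to implement them against real tooling.

```python
from datetime import datetime
from statistics import mean

# Invented sample data: one record per production deployment.
# 'committed'/'deployed' give the lead time of a change; 'failed'
# marks a deployment that degraded the service and needed fixing.
deployments = [
    {"committed": "2020-10-01", "deployed": "2020-10-03", "failed": False},
    {"committed": "2020-10-02", "deployed": "2020-10-07", "failed": True},
    {"committed": "2020-10-08", "deployed": "2020-10-09", "failed": False},
    {"committed": "2020-10-10", "deployed": "2020-10-14", "failed": False},
]

def days(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Throughput: how often we deploy and how long a change takes to ship.
lead_times = [days(d["committed"], d["deployed"]) for d in deployments]
intervals = [days(deployments[i - 1]["deployed"], deployments[i]["deployed"])
             for i in range(1, len(deployments))]

# Stability: how often a deployment fails. Recovery time would be the
# other half of the picture, omitted here to keep the sketch short.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"mean lead time: {mean(lead_times):.1f} days")
print(f"mean deployment interval: {mean(intervals):.1f} days")
print(f"failure rate: {failure_rate:.0%}")
```

The same computation can be run globally or restricted to one stage of the delivery pipeline, which is what makes local improvements trackable at the system level.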

Let’s assume from now on that I have convinced you that Throughput and Stability are the two metrics to focus on, in addition to the collaboration and community health metrics your Open Source or Inner Source project is already using.

If you are not convinced even after reading S. Smith’s book, by the way, you might want to check the most common references on Continuous Delivery. Dave Farley, one of the fathers of the Continuous Delivery movement, has a new series of videos you should watch. One of them deals with these two metrics.

2.- Correlate both groups of metrics

Let’s assume for a moment that you have implemented such delivery process metrics in several of the projects in your Inner Source initiative, or across the delivery pipelines of your Open Source project. The next step is to introduce an Improvement Kata process to define and evaluate the outcome of specific actions against pre-established high-level SMART goals. Such goals should aim for a correlation between both types of metrics (community health/collaboration and delivery process ones).

Let me give an example. It is widely understood in Open Source projects that being welcoming is a sign of good health. It is common to measure how many newcomers the project attracts over time and their initial journey within the community, looking for their consolidation as contributors. Similar thinking is followed in Inner Source projects.

The truth is that more capacity does not always translate into higher throughput or increased process stability; on the contrary, it is widely accepted among execution managers that the opposite is more likely in some cases. Unless the work structure, meaning the teams and the tooling, is oriented to embrace flexible capacity, high rates of capacity variability lead to inefficiencies. This is an example of an expected negative correlation.

In this particular case, then, the goal is to extend the actions related to increasing our number of new contributors to our delivery process, ensuring that our system can absorb an increase in capacity at the expected rate and that we can track it accordingly.

What do we have to do to mitigate the risk of the integration failure rate increasing as throughput increases at the commit stage? Can we increase our build capacity accordingly? Can our testing infrastructure digest the increase in builds derived from increasing our development capacity, assuming we keep the number of commits per triggered build constant?

In summary, work on the correlation between both groups of metrics, linking actions that affect community health and collaboration metrics with their effect on delivery metrics.
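As a toy illustration of such a correlation check, the Python sketch below computes the Pearson correlation between a community health series (newcomers retained per month) and a delivery series (deployments per month). All figures are invented; in practice you would pull them from your community analytics and your delivery pipeline.

```python
from math import sqrt

# Invented monthly series for one project.
new_contributors = [2, 3, 5, 4, 6, 8]        # community health metric
throughput       = [10, 11, 14, 13, 15, 18]  # deployments per month

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(new_contributors, throughput)
print(f"correlation between onboarding and throughput: {r:.2f}")
# A value near +1 supports the claim that onboarding work is not
# degrading delivery; a clearly negative value flags the capacity
# problem described above.
```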

3.- Focus on decisions and actions that create a positive correlation between both groups of metrics.

Some of the executed actions designed to increase our number of contributors might lead to a reduction in throughput or stability; others might have a positive effect on one of them but not the other (spoiler alert: at some point both will decrease); and some others will increase both of them (positive correlation).

If you work in an environment where Continuous Delivery is the norm, those behind the execution will understand which actions produce a positive correlation between throughput and stability. Your job will simply be to link those actions with the ones you are familiar with in the community health and collaboration space. If not, your work will be harder, but still worth it.

For our particular case, you might find, for instance, that a simple measure to digest the increasing number of commits (bug fixes) is to scale up the build capacity, if you have budget remaining. You might find, though, that you have problems doing so when reviewing acceptance criteria because you lack automation, or that your current testing-on-hardware capacity is almost fixed due to limitations in the system that manages your test benches, and additional effort is required to improve the situation.

Establishing experiments that consider not just the collaboration side but also the software delivery one, and promoting to production those experiments that demonstrate a positive correlation between the target metrics (increasing all of them), might bring you surprising results, sometimes far from the common wisdom of those focused only on collaboration aspects, but closer to the views of those focused on execution.
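Here is a sketch of how such an experiment log could be screened, with invented names and observed deltas; the only point is that promotion decisions look at both metric groups at once.

```python
# Invented record of Improvement Kata experiments: each one stores the
# observed change in a collaboration metric and in the two delivery
# metrics while the experiment was running.
experiments = [
    {"name": "mentoring program",     "contributors": +4, "throughput": +2, "stability": +0.03},
    {"name": "unreviewed fast lane",  "contributors": +6, "throughput": +3, "stability": -0.05},
    {"name": "scaled build capacity", "contributors":  0, "throughput": +3, "stability": +0.01},
]

def positively_correlated(e):
    """Keep experiments that improved collaboration without hurting delivery."""
    return all(e[k] >= 0 for k in ("contributors", "throughput", "stability"))

for e in filter(positively_correlated, experiments):
    print(f"promote to production: {e['name']}")
```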

4.- Create a reporting strategy for developers and managers based on such positive correlation.

A board member of an organization I was managing once gave me a piece of advice I have followed ever since. It was something like…

Managers talk to managers through reports. Speak up clearly through them.

As a manager, I used to put a lot of thought into the reporting strategy. I have written some blog posts about this point. Besides things like the language used or the OKRs and KPIs you base your reporting upon, understanding the motivations and background of the target audience of those reports is just as important.

I suggest you pay attention to how those you want to convince about participating in Open Source or Inner Source projects report to their managers, as well as how others report to them. Are those reports time based or KPI based? Are they presented and discussed in 1:1s or in a team meeting? Usually every senior manager dealing with execution has a consolidated way of reporting and being reported to. Adapt to it instead of keeping the format we are more used to in open environments. I love reporting through a team or department blog, but it might not be the best format in this case.

After creating and evaluating many reports about community health and collaboration activities, I suggest changing how they are conceived. Instead of focusing first on collaboration growth and community health and then on the consequences of such improvements for the organization (benefits), focus first on how product or project performance has improved while collaboration and community health have improved. In other words, change how cause and effect are presented.

The idea is to convince execution managers that by participating in Open Source projects or Inner Source programs, their teams can learn how to be more efficient and productive in short cycles while achieving long-term goals they can present to executives. Help those managers present both types of achievements to their executives using your own reports.

For engineers, move the spotlight away from the growth of interactions among developers and put it on the increase in stability derived from making those interactions meaningful, for instance. Or try to correlate diversity metrics with defect management results, or with reductions in change failure rates or detected security vulnerabilities, etc. Move part of your reporting focus away from team satisfaction (a common strategy within Open Source projects) and put it on team performance and productivity. They are obviously intimately related, but tech leads and other key roles within your company might be more sensitive to the latter.

In summary, you have achieved the proposed goal when execution managers can take the reports you present to them and insert them into theirs without reinterpreting the language, the figures, the datasets, the conclusions…

5.- Turn your story around.

If you manage to find positive correlations between the proposed metrics and report those correlations in a way that resonates with execution managers, you will have established a very powerful platform on which to build an unbeatable story around your Inner Source program or your participation in Open Source projects. Investment growth will meet less resistance, and it will be easier to infect execution units with the practices and tools promoted through the collaboration program.

Advocates and evangelists will feel more supported in their viral work, and those responsible for these programs will gain an invaluable ally in their battles with legal, procurement, IP or risk departments, among others. Collaboration will not just be good for the developers or the company but also, clearly, for the product portfolio or the services. And not just in the long run but also in the shorter term. That is a significant difference.

Your story will be about increasing business impact through collaboration instead of about collaborating to achieve bigger business impact. Open collaboration environments increase productivity and have a tangible positive impact on the organization’s products and services, so they have a clear positive business impact.

Conclusion

To attract execution managers so they promote the participation of their departments and teams in Open Source projects and Inner Source programs, I recommend defining a different communication strategy: one that relies on reports based on the results of actions that show a positive correlation between community health and collaboration metrics and delivery process performance metrics, especially throughput and stability. This idea can be summarized in the following steps, explained in these two articles:

  • Collaboration within a commercial organization matters more to management if it has a measurable positive business impact.
  • To make decisions and evaluate their impact within your Inner Source program or FLOSS community, combine collaboration and community health metrics with delivery metrics, fundamentally throughput and stability.
  • Prioritize those decisions/actions that produce a tangible positive correlation between these two groups of metrics.
  • Report, especially to managers, based on that positive correlation.
  • Adapt your Inner Source or Open Source story: increase business impact through collaboration.

In a nutshell, it all comes down to proving that, at scale…

if you want to go far, together is faster.

Check the first article of this series if you haven’t already. You can also watch the recording of the talk I gave at ISC Fall 2020, where I summarized what is explained in these two articles.

I would like to thank the ISC Fall 2020 content committee and organizers for giving me the opportunity to participate in such an interesting and well-organized event.

Codethink is sponsoring Akademy 2018 and I am attending.

Back in July 2017 I wrote a blog post, published by Codethink, explaining why it is good business to support community-driven FOSS events. This post is related to that one.


I will be attending Akademy 2018. It will take place in Vienna, Austria, from August 11th to 17th. I will be there representing Codethink, which is a proud sponsor of this 2018 edition.

I attend Akademy regularly since, as most of you know, I have been an active contributor, a user of the software, a supporter of some of its activities and/or a KDE e.V. member for some time now. I learn a lot during this event, and not just about KDE-related topics.

This edition has several specific points of interest to me:

  • I am involved in a project called BuildStream, a FOSS integration tool for declarative systems and applications. Currently its main users are the GNOME integration team and the Freedesktop SDK project. We would like to expand our user base among communities like KDE.
  • Freedesktop SDK provides platform and SDK runtimes for flatpak apps, based on freedesktop modules. Several colleagues of mine are behind this project, which is about to release a new version.
  • A year ago, during an Akademy BoF, some KDE contributors, myself included, decided to put some effort towards enabling KDE software in automotive. This year the first modest results will be presented to the wider KDE community. I have been preaching about this move for some time now, so it is exciting for me to see others involved and making progress.
  • I will attend the KDE e.V. Annual General Assembly. KDE e.V. is the organization that supports the KDE community, which is an important activity.
  • I will update my work laptop from openSUSE Leap 42.3 to Leap 15, taking advantage of the presence at the event of a couple of former colleagues from the extinct openSUSE Team at SUSE, and of Slimbook, the guys I bought my laptop from. Makes sense, right?
  • Codethink is always looking for talent willing to move to Manchester, UK, or, exceptionally, to work remotely. Come talk to me if you might be interested.

It will be, as usual, a great event. See you all there.

Moving from a traditional product/release focused delivery model to a rolling model

Over the past few weeks, the GDP delivery team, together with some key contributors, has been working on a not very visible but still important change. The GDP project has laid the groundwork to turn GDP’s release-based delivery model into a “rolling” one. My colleagues will provide the technical details behind this change in a coming post. I want to provide a higher-level view of what is happening and why.

Some background

GDP was born as a “demo” project. The main goal was to provide a platform to show the software components for automotive that the different GENIVI Expert Groups were developing. This was done through a delivery model focused on publishing a stable and easy-to-consume version of the project every few months: a major release.

Strictly speaking, GDP is a derivative. It is based on poky and uses Yocto tools to “create” the Linux-based platform, adding the different components developed by the GENIVI Alliance together with upstream software. For the defined purpose, the release-centric model works fine, especially if you concentrate your effort in very specific areas of the software stack with few dependencies on the other areas, and have a limited number of contributions and environments where the system should work.

During 2016 the GDP has grown significantly. We have more software, more contributors, more components and more target boards to take care of. Although the above model had not been challenged yet, it was just a matter of time.

As I explained in two previous posts [1][2], the GDP is moving from being a Demo to a Development Platform. Changing the mission means changing the goals and the target group, which implies adjusting the deliverable to meet the new expectations.

So, right after the 14th AMM, the Delivery Team decided to change the delivery model to better meet the new mission: providing developers the newest possible software with an increasing quality threshold. At the same time, in order to increase the number of contributors, the GDP needs to provide a solid new platform every once in a while. That should be done through a solid release.

What is a rolling delivery model?

The key idea behind a modern delivery model is to ensure that the transition from one stable release to the next takes affordable effort. Let me give an example to picture the idea.

Problem statement

Imagine an organization that publishes one release per year. Let’s assume that a particular release included 100 patches developed by employees and that, during the lifetime of the release (also 1 year), another 100 patches were added to the product as bug fixes and updates. At the end of the release’s lifetime, the product includes 200 patches that define the value the product provides to customers and users.

Either for technical or business reasons, a year later it is time to upgrade. Our organization has to create a new Linux-based system with newer upstream code, integrating the patches from the previous release plus the updates and bug fixes developed for the coming release.

After a simplification process done by engineers, the number of patches that need to be integrated into this newer base system is reduced to 150. The organization also wants to add to this new release another 100 patches that represent the new features developed during the last year for this new version.

The delivery team now has to integrate 250 patches into the new base system, 150 of them coming from the previous release. One might think that the effort required to do this is 2.5 times the effort invested in the previous release. Maybe you think the effort is not that high, since some of the patches were developed with the new base system in mind. There are many other considerations like this one that might affect the initial estimation. This example is obviously a simplification.

However, any experienced release manager will tell you that moving patches integrated into an older base system onto a newer one (forward-porting) requires additional effort that grows faster than linearly with the number of patches. Forward-porting is the “road to hell”. Iterate this example a few times and you will understand why so many organizations out there have as many people focused on delivery as they have in development: they migrated to a Linux base system while keeping the traditional delivery model they used with closed source software.
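The compounding effect can be illustrated with a toy model that reuses the numbers above (100 new patches and 100 fixes per cycle, 75% of carried patches surviving simplification). The superlinear exponent is invented purely for illustration; real costs depend on the codebase and the teams.

```python
# Toy model of a release-based delivery cycle.
carried = 0        # patches inherited from the previous release
new_work = 100     # new feature patches developed for each release
fixes = 100        # fixes/updates added during a release's lifetime
survival = 0.75    # fraction of patches surviving simplification

for release in range(1, 6):
    port_effort = carried ** 1.3   # assumed superlinear forward-porting cost
    total = carried + new_work
    print(f"release {release}: {total} patches integrated, "
          f"{carried} forward-ported (~{port_effort:.0f} effort units)")
    carried = int((total + fixes) * survival)
```

Running it shows the forward-porting bill growing much faster than the patch count: the dynamic described above.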

(Figure: release-based delivery model)

Possible solutions

One of the paths to improve the situation is upstreaming those changes that affect generic components. Some companies also upstream their new features early in their development process, generally looking for wider testing, or after they have been released to customers, to increase adoption and reduce future maintenance effort. This is definitely a must-do.

From the delivery perspective, though, the most popular way to tackle the problem is reducing the release cycle, so the number of patches to forward-port in each release is smaller. The development time and the maintenance cycles are also shorter, and the same applies to the complexity of the forward-porting activities. “Jumping” from one release to the next becomes easier. Add automation of repetitive tasks to this recipe and you feel you have a win… for some time.

The journey down the “road to hell” becomes more comfortable, but our organization is still getting burned, even if our customers, and we ourselves, can digest releasing frequently. We all know how expensive and stressful a release can become.

The most suitable option for achieving sustainability while scaling up the amount of software an organization can manage, without releasing more often than your market can digest, is to change your delivery model.

Rolling delivery models are a serious attempt to solve this problem, putting integration at the centre instead of the software itself.

This model is not new. Gentoo has been doing it forever, but it was Arch Linux that implemented it in a way that immediately attracted the attention of thousands of developers. Still, it was a model with no hope beyond hardcore Linux developers. openSUSE brought this model to a new level by implementing a process whose output was stable enough for a much wider audience, and compatible with producing a more stable release and a commercial one. Nowadays there are other interesting examples out there that commercial organizations can learn from.

What is a rolling model?

It is still hard to define, but essentially it is a process in which, ideally, you have one continuous integration pipeline as the one and only entry point for the software you plan to ship. Releases then become snapshots of all or part of the software already integrated, after going through a specific stabilization, deployment and release process.

So ideally, if you release a portfolio, you integrate only once, significantly reducing the costs of having different engineers working on different versions of the same software and of forward-porting, among other benefits.
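Here is a minimal sketch of that idea, with invented class and release names: one integration line, CI as the single entry point, and releases as snapshots of what is already integrated.

```python
# Minimal sketch of a rolling model: one integration line, releases
# as stabilized snapshots. Names and structure are invented.
class IntegrationLine:
    def __init__(self):
        self.integrated = []   # the single, continuously tested trunk
        self.releases = {}

    def passes_ci(self, change):
        # Stand-in for the real pipeline: build, tests, verification.
        return True

    def integrate(self, change):
        if self.passes_ci(change):   # the one and only entry point
            self.integrated.append(change)

    def snapshot(self, name):
        # A release derives from the trunk, so no patch lives only in
        # a release branch and nothing needs forward-porting later.
        self.releases[name] = list(self.integrated)

master = IntegrationLine()
for patch in ("patch-1", "patch-2", "patch-3"):
    master.integrate(patch)
master.snapshot("release-A")   # hypothetical release name
```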

(Figure: rolling delivery model)

So a rolling delivery model is a lot more than a continuous integration chain, although that chain is the key piece.

Please keep in mind that this is an oversimplification. This description doesn’t go into detail on other key aspects like maintenance cycles, how upstreaming affects the process, strategies for updating the released products, etc.

A transformation process that takes an organization from a release-centric model to a rolling one is about doing less and doing it faster, so fewer people can handle more software with less pain, allowing more people to concentrate on creating value: developing new and better software instead of just shipping it.

Back to GENIVI

Moving from a release-centric to a rolling model is hard work. Frequently it is easier to start all over again. Since the GDP is still a relatively small project, we can afford to go through the transformation process step by step.
The first stage has been creating that single integration chain and treating GDP-ivi9, our latest release, and those that follow it, as deliverables of what we today call Master. Ideally, no patch will be added directly to the release branches; they should come from Master. That way, we reduce (ideally to zero) the effort of forward-porting patches while putting the latest software in the hands of our contributors on a regular basis.
To do so, we are adapting to the new model our simple processes and CI system, the GDP repository structure, the wiki contents, the task management structures, several key policies, our communication around the project…

The GDP will face a very interesting challenge, since this model needs to be proven successful for a derivative. If we are able to move fast enough, the time will come when we will need to decide whether GDP remains a derivative or becomes upstream; that is, either GDP limits its delivery speed to the Poky release cycle, or we work upstream with the Yocto project to increase our delivery speed.

That is a good problem to have, isn’t it?

If (almost) everything goes right, after adding a few needed services to GENIVI’s infrastructure and ensuring the updated software complies with the selected verification criteria, the same number of people will be able to manage and deliver more software. And once the new processes become more stable, automation will not just increase efficiency; it will boost the project by allowing GENIVI to achieve goals that only big organizations with large delivery teams can. This is the kind of transformation that takes time to consolidate, but has a huge impact.

Based on my experience, I believe that if GENIVI is able to sustain this effort and keep a clear direction for the next couple of years, the benefits of moving towards a rolling model will be noticeable even outside the industry.

This blog post was originally published on the GENIVI blog on 2016/07/04. I have adapted the formatting for Blogger. The content should be the same.

Say Hi! to the new GENIVI Development Platform

On Wednesday February 17th, the GENIVI Alliance released a QEMU image of the GENIVI Demo Platform ivi9 Beta version, together with everything needed (instructions, source code, recipes, etc.) to build GDP-ivi9 with Yocto. A few weeks later, on March 8th, the first release candidate was published.
Finally, last April 19th GDP-ivi9 was published, targeting QEMU, Renesas Porter and RPi2. Check the release announcement and download the different images and source code from the GDP download page.
I joined the GDP project in November 2015, leading a small team of developers from Codethink, with the idea of moving GDP from a demo platform towards a collaboration platform. In summary, going from +r-- to +rwx.

What was GDP?

The GENIVI Demo Platform was a compilation of middleware components developed by GENIVI, integrated with Yocto or Baserock and based on poky, designed to showcase and test the work done by GENIVI’s Expert Groups.

What is GENIVI Development Platform?

At GENIVI’s 14th All Members Meeting (AMM) it was announced that GDP would change its name from Demo Platform to Development Platform, reflecting the new spirit that arose during the delivery of the GDP-ivi9 version.
The general idea is to mature those GENIVI modules that were developed as proofs of concept (PoC) and provide up-to-date software together with an SDK, to attract developers to participate as contributors, with GDP as their number one Open Source platform for automotive.
Find further information about the GENIVI Development Platform on GENIVI’s public wiki, in the GDP project pages. The recently announced name change will be reflected in the wiki in the coming weeks.

Coming actions

During the coming weeks, the GDP delivery team will focus on the following topics:
  • Migration from the current infrastructure to GitHub.
    • Confluence will remain as the project wiki and JIRA as the ticketing system. The same applies to the rest of GENIVI.
  • Add another board to our current targets: the Intel MinnowBoard.
  • Define together with the GDP community the roadmap for the next GDP version.
  • Create a first alpha of the new version including the latest GENIVI software.

Feel free to propose enhancements or new features for GDP. The only thing you have to do is create a subtask under the ticket GDP-154, describe it, and explain the benefits and potential risks/challenges. We will discuss them on the mailing list. I am looking forward to seeing Plasma 5 as part of GDP.

GENIVI 14th AMM and other events to promote GDP.

After the release of the new version, the GDP maintainers and I concentrated on making sure GDP was ready for GENIVI’s 14th All Members Meeting (AMM), which took place in Paris from April 25th to 29th.
I participated as a speaker in 3 sessions, and my colleagues at Codethink delivered a couple of hands-on sessions about GDP-ivi9. It has been a lot of work but a good finish line for this release cycle. We will publish the slides in the coming days.
A few weeks earlier I presented the GDP project at the Embedded Linux Conference (ELC), which took place in San Diego from April 4th to 6th. It was my first time at this conference and I enjoyed it. I also participated in the Collaboration Summit, invited by AGL and the Linux Foundation. I will provide some more details about these events in a later post.
I plan to attend QtCon to promote GDP among Qt/KDE developers, and the Automotive Linux Summit, which will take place in Japan, to spread the word about this open project for automotive. I have also confirmed my presence on June 2nd at OpenExpo, in Madrid. It will be my first event in Spain in quite some time.

Summary

It has been a very busy but very productive six months. Leading a small but promising Open Source project that might have a big influence in automotive in the future, working together with my colleagues at Codethink and GDP community members, has been very interesting. I am learning a lot about this industry… by doing.

Virtue of Necessity. Canary, sublimate your company.

This past July 16th I participated in the Tenerife LAN Party, in its Tenerife Innova section, invited by the Free Software Office of La Laguna University, as part of the track titled (freely translated from Spanish) Open Source from the Canary Islands: Stories Told in First Person.

This Free Software Office is well known for managing the biggest KDE deployment in Spain, with 3,000 computers spread across several computer labs, laboratories and libraries, among other internal projects.

My talk (20 min) had a title along the lines of: Virtue of Necessity. Canary, sublimate your company. You can find the slides (in Spanish) on my site or in my Slideshare account.

What was the talk about?

The natural growth path for a software company is to create a site and grow until it reaches a point where a second production site is needed. Meanwhile, departments like sales and technical support might grow distributed. As the company grows, the number of production sites grows with it. The organizational structure varies with the nature of the company: sometimes production teams are replicated across different sites, sometimes different business units are divided per site, and in other cases a particular site hosts teams that take care of different products or (micro)services.

In general, if the company follows an “agile” approach, it will try to reduce inter-site communication needs by placing team members together at a particular site. Based on my experience and on how FOSS has been developed, depending on the market you are playing in, turning your company into a distributed environment might be a smart move.

Let’s start by providing some context and definitions.

What do I mean by sublimation in this context?

Sublimation is a state change in which a substance transitions from the solid state to the gas state without going through the liquid one.

What do I mean by a distributed environment?

In most agile literature, in fact in most software development management books, a distributed environment means multi-site distribution. But in Open Source we refer to a different kind of environment. I have come up with the following (subjective) definition:

A distributed environment/organization:

  • Has no site with a significant number of employees (developers). Most of them work remotely.
  • Accesses most (if not all) corporate applications through the web, not through a VPN; that is, it is a WAN environment, not a LAN one.
  • Uses chat/IRC (or equivalent) and video conferences as the default synchronous communication channels.
  • Has employees spread across several time zones.
  • Is a multicultural environment, with English as the default language.
  • Organizes regular face-to-face sprints (as we understand them in FOSS: hackathons), maybe even a corporate event where the whole company or business unit gets together.

Open Source as a distributed environment

Free/Open Source Software has a geographically distributed nature. As you know, most of the relevant communities, no matter what size we are talking about, are formed by developers located in many different countries, working from home or from a company site. If we look at the most relevant ones, they are truly global. Every tool and every process has been designed (intentionally or not) with this distributed nature in mind.

Now that Open Source is everywhere, more and more companies are embracing it, participating in its development and collaborating in global communities. From the process perspective, they are being influenced by the Open Source way, including its distributed nature.

Many of the companies embracing Open Source are understanding that adapting to this new environment makes collaboration easier. It reduces the friction between community and internal processes. There is a long way to go, but I believe this trend is unstoppable for a variety of reasons (out of the scope of the talk). We are starting to see more and more organizations that are fully distributed, and start-ups that are born with such a structure in mind.

Canary Islands, a fragmented market.

The Canary Islands are a group of 7 islands in the Atlantic Ocean: two major ones (900k people each) and 5 smaller ones, for a total of 2.1 million people and 11 million tourists per year. Tourism is obviously the main industry, so there are 6 international airports, two national ones and 10 harbours, half of which regularly receive big ships/cruises.

Data connectivity has improved a lot over the last 10 years but, due to the difficult geography, it is unequally distributed across the islands. Even within the main islands, Tenerife and Gran Canaria, a significant percentage of the territory has zero internet coverage.

So it is a very fragmented market and, although the communication infrastructure is first class, travel across the islands takes a significant amount of time, it is expensive, and connectivity can be a challenge. In general, the transportation strategy has been designed to bring people from Europe, not for internal mobility.

This means that, as a software company, consolidation and growth in such a market is tough, very tough, even if you focus on tourism.

Software companies there expand following the “natural” approach, which is to create a software production centre on one of the main islands and provide support from there to the other islands. Until they consolidate their position, software companies cannot afford to have developers or technical support on the second main island. If service/support is required on one of the small islands, you simply travel there. The limitations that software companies face due to the market conditions rarely allow them to create a second software development centre in the Canary Islands.

There are very few Spanish cities with year-round daily direct flights from/to the Canary Islands. Madrid and Barcelona are the biggest markets, but also the most expensive cities. The flight takes 2:30 hours to Madrid and 3 hours to Barcelona, which is a lot by European standards. So opening a second development centre on the mainland while keeping the headquarters in the Canary Islands is a real challenge.

In other words, if you want to scale your business, you need to assume bigger risks than companies based on the continent, despite the islands being a cheaper place with plenty of professionals, thanks to the existence of two universities.

… but,

In my talk I tried to show that all those limitations can be turned into advantages if organizations adopt a distributed approach early in their consolidation process, or even from the very beginning. These constraints offer a first-class laboratory for experimenting with some of the key variables that need to be managed when scaling up your company, while leaving aside some of the most complicated ones, related to a great extent to the internationalization of the organization.

I made a call to sublimate your company, going from an “on-site” state to a fully distributed one, skipping the multi-site state. Even better, create your software company as a distributed environment from the very beginning.

Why sublimate your company in the Canary Islands?

I summarized the advantages of sublimating your company if you are based in the Canary Islands, Spain, in the following statements:

  • Distributed environments adapt better to the Canary Islands environment.
  • It will prepare you earlier for the internationalization phase, while you are still small.
  • Distributed environments adapt well to certain business models and support needs that are becoming popular nowadays.
  • If your company wants to be, or already is, heavily involved in Open Source communities, it is easier to adapt your internal processes to the collaborative environments you participate in.
  • Talent attraction is less difficult.
  • Some of your fixed costs turn into production costs; that is, you gain flexibility.

Which variables will be affected by sublimating your company?

These are the most relevant variables to consider:
  • Cost per employee: reduced fixed costs per employee, increased travel costs.
  • Organization chart: from vertical to horizontal
  • Human Resources policies: training, coaching, people management. From f2f to online.
  • Data privacy and security: adapt to a WAN environment.
  • Tools: adapted to distributed, higher-latency environments.
  • Schedule / availability: culture shift, from presence to availability
  • Development and support methodologies: from f2f conversations to remote synchronous/asynchronous channels. From agile to FOSS? Is FOSS agile? From an 8-10 hour window or rotations to a longer daily production/availability window.
  • Customer relations/engagement: from traditional account management to “community” engagement management. Engineers interface with customers.
  • Transportation and connectivity: employees need to be connected and travelling demands will increase. Accountability/reimbursement processes will be more complex.
  • Business model: your business model might need to adapt to your new distributed nature.
  • Potential market: the presence of employees in new areas and the fact that your processes are adapted to distributed environments might change your target market and/or nature/size of potential customers.
  • Competitors: the influence of your new distributed nature might alter your positioning against competitors.
There are more, but these are the ones that should be considered carefully before sublimating your company in the Canary Islands. As you grow, internationalization will knock at your door very soon. There are other variables to consider in that case; they are out of the scope of this talk:
  • Time zones
  • Multiculturalism
  • Language barrier
  • Retributions and incentives. Cost of living.
  • Fiscal and employment legislation differences
  • Travel and accommodation costs and reimbursements
  • Accountability and taxes
  • Many more…

Summary

The Canary Islands are a tougher market than mainland Spain. Adopting a distributed nature early in a software company’s life cycle allows you to adapt better and faster to this environment, preparing you for later stages too, especially the internationalization phase. Sublimation provides a competitive advantage, especially if you develop Open Source and participate in open collaborative environments.

There are a number of variables that should be carefully considered, though. Managing them correctly is a requirement for success.