Software events in Málaga

In May I attended two interesting conferences in Málaga:

J On The Beach

Since I moved back to Málaga I had been trying to attend this conference but, for one reason or another, I couldn't until this 2018 edition. J On The Beach is the most international software event that takes place in the south of Spain on a regular basis, and this year was no exception. It is a very well organised event, with good speakers, up-to-date topics and participants from many different countries. The conference is held in English.

My expectations were high and the event met them. I had the chance to listen for the very first time to M. Hashimoto, who gave the audience an overview of Terraform. It is always a pleasure to listen to those who create the code you use, and he has become very popular for tools like Vagrant and Terraform itself. I enjoyed his talk very much, as I did the talks from H. Karau and J. Armstrong. By the way, M. Hashimoto will be speaking at OSSJ 2018 in a few days, so I will have the chance to see him again, this time in Tokyo.

J On The Beach is a three-day event: the first day is focused on workshops and training sessions, while the other two are mostly talks. After the closing session a party is organised, with good music and plenty of drinks.

I recommend my readers pay attention to next year's edition. Add this event to your list. Tickets sell out fast, so subscribe to the newsletter to find out when you can get one.

OpenSouthCode

OpenSouthCode is a general-purpose FOSS event, very popular among students and local hackers. Last year I talked about the FOSS automotive platforms developed by AGL and GENIVI. This year I gave an overview of CIP, the work we are currently doing and our near-future plans. Check the slides for more information.

OpenSouthCode is a two-day event. The first day, a Friday, is all about workshops, meetups and training activities, while the second, a Saturday, is reserved for talks in Spanish.

If you live in the south of Spain or happen to be around when this event takes place, I totally recommend it.

Upcoming events in Málaga

There are two additional software events in Málaga to pay attention to:

Gamepolis

More and more international software companies are coming to Málaga, and several of them are gaming companies; there are also a few Spanish ones. If you are into game development or you are an avid gamer, this will be a great event for you.

PyConES 2018

Codethink sponsored PyConES 2017, which took place in Cáceres, Extremadura, and several of my colleagues attended together with me. One of them, Pedro Álvarez, was part of the organization. This edition will take place in Málaga and I plan to attend again. It is a 400-person event packed with Python developers.

Further notes

I would like to see more international FOSS events coming to Málaga. It is a great place to organise a conference: it has a big airport with connections to all major European hubs, direct flights to many European cities, fast trains to both Madrid and Barcelona, a wide range of accommodation in price and quality, a big congress centre and hotels able to host events, good weather most of the year, the beach, and Granada, Ronda, Antequera and Sevilla close by… plus an increasing number of software companies opening offices in the area. It is nice not to have to travel to attend a good conference once in a while.

Moving from a traditional product/release focused delivery model to a rolling model

Over the past few weeks the GDP delivery team, together with some key contributors, has been working on a not very visible but still important change: the GDP project has laid the groundwork to turn its release-based delivery model into a "rolling" one. My colleagues will provide the technical details behind this change in a coming post. Here I want to provide a higher-level view of what is happening and why.

Some background

GDP was born as a "demo" project. The main goal was to provide a platform to show the software components for automotive that the different GENIVI Expert Groups were developing. This was done through a delivery model focused on publishing a stable, easy-to-consume version of the project every few months: a major release.

Strictly speaking, GDP is a derivative. It is based on Poky and uses the Yocto tools to "create" the Linux-based platform, adding the different components developed by the GENIVI Alliance together with upstream software. For the defined purpose, the release-centric model works fine, especially if you concentrate your effort in very specific areas of the software stack with few dependencies on other areas, and you have a limited number of contributions and environments where the system should work.

During 2016 the GDP has grown significantly. We have more software, more contributors, more components and more target boards to take care of. Although the above model had not been challenged yet, it was just a matter of time.

As I explained in two previous posts [1][2], the GDP is moving from being a demo to a development platform. Changing the mission means changing the goals and the target group, which implies the need to adjust the deliverable to meet the new expectations.

So, right after the 14th AMM, the Delivery Team decided to change the delivery model to better meet the new mission, providing developers with the newest possible software at an increasing quality threshold. At the same time, in order to increase the number of contributors, the GDP needs to provide a solid new platform every once in a while. That should be done through a solid release.

What is a rolling delivery model?

The key idea behind a modern delivery model is to ensure that the transition from one stable release to the next takes an affordable amount of effort. Let me give an example to picture the idea.

Problem statement

Imagine an organization that publishes one release per year. Let's assume that a particular release included 100 patches developed by employees and that, during the lifetime of the release (also one year), another 100 patches were added to the product as bug fixes and updates. At the end of the release lifetime, the product includes 200 patches that define the value the product provides to customers and users.

For either technical or business reasons, a year later it is time to upgrade. Our organization has to create a new Linux-based system with newer upstream code, and it has to integrate the patches from the previous release plus the updates and bug fixes developed for the coming release.

After a simplification process done by the engineers, the number of patches that need to be integrated into this newer base system is reduced to 150. The organization also wants to add to the new release another 100 patches representing the new features developed during the last year for this new version.

The delivery team now has to integrate 250 patches into the new base system, 150 of them coming from the previous release. One might think that the effort required to do this is 2.5 times the effort invested in the previous release. Or maybe you think the effort is not so high, since some of the patches were developed with the new base system in mind. There are many other considerations like this one that might affect the initial estimation; this example is obviously a simplification.

However, any experienced release manager will tell you that moving patches integrated into an older base system onto a newer one (forward-porting) requires additional effort, beyond a linear relation with the number of patches. Forward-porting is the "road to hell". Iterate this example a few times and you will understand why there are so many organizations out there with as many people focused on delivery as on development: they migrated to a Linux base system while keeping the traditional delivery model they had when working with closed-source software.

[Figure: the release-based delivery model]
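To picture the difference between a linear reading of the example and the superlinear claim above, here is a toy cost model in Python. The patch counts come from the example; the superlinear exponent is purely an assumption I am making for illustration, not measured data.

    # Toy cost model for the release example above (illustration only).
    def integration_effort(new, forward_ported, alpha=1.5):
        """New patches cost ~1 unit each; forward-ported patches cost
        more than linearly with batch size (alpha > 1 is an assumed way
        to model the extra pain, not a measured value)."""
        return new + forward_ported ** alpha

    # First release: 100 patches at release time plus 100 during its
    # lifetime, all written against the base they were integrated on.
    first = integration_effort(new=200, forward_ported=0)          # 200

    # Next release: 150 forward-ported plus 100 new patches. A linear
    # reading says 250 units; a superlinear model says far more.
    second = integration_effort(new=100, forward_ported=150)       # ~1937

    # Shorter cycles help because batches shrink: two half-cycles of 75
    # forward-ported patches each cost less than one batch of 150...
    two_small = 2 * integration_effort(new=50, forward_ported=75)  # ~1399

    # ...but the cost never disappears, which is why the rolling model
    # discussed below aims to remove forward-porting altogether.
    print(first, second, two_small)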

Possible solutions

One of the ways to improve the situation is upstreaming those changes that affect generic components. Some companies also upstream their new features early in the development process, generally looking for wider testing, or after they have been released to customers, to increase adoption and reduce future maintenance effort. This is definitely a must-do.

From the delivery perspective, though, the most popular way to tackle the problem is reducing the release cycle, so the number of patches to forward-port in each release is smaller. The development time and the maintenance cycles are shorter too, and the same applies to the complexity of the forward-porting activities. "Jumping" from one release to the next becomes easier. Add automation of repetitive tasks to this recipe and you feel you have a win… for some time.

The journey through the "road to hell" becomes more comfortable, but our organization is still getting burned, even if our customers, and we ourselves, can digest releasing frequently. We all know how expensive and stressful a release might become.

The most suitable option for achieving sustainability while scaling up the amount of software an organization can manage, without releasing more often than the market can digest, is to change the delivery model.

Rolling delivery models are a serious attempt to solve this problem by putting integration at the centre instead of the software itself.

This model is not new. Gentoo has been doing it forever, but it was Arch Linux that implemented it in a way that immediately attracted the attention of thousands of developers. Still, it was a model with no hope beyond hardcore Linux developers. openSUSE brought this model to a new level by implementing a process whose output was stable enough for a much wider audience, and compatible with the publication of more stable and commercial releases. Nowadays there are other interesting examples out there that commercial organizations can learn from.

What is a rolling model?

It is still hard to define, but essentially it is a process in which, ideally, you have one continuous integration pipeline as the one and only entry point for the software you plan to ship. Releases then become snapshots of all or part of the software already integrated, after going through a specific stabilization, deployment and release process.

So ideally, if you release a portfolio, you integrate only once, significantly reducing the costs of having different engineers working on different versions of the same software and of forward-porting, among other benefits.

[Figure: the rolling delivery model]
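To make the snapshot idea concrete, here is a minimal Python sketch of a single integration line with releases cut as snapshots of it. All names are hypothetical; this is a conceptual sketch, not any project's actual tooling.

    # Conceptual sketch: one integration line, releases as snapshots.
    integration_line = []   # the one and only entry point for software
    snapshots = {}          # release name -> frozen state of the line

    def integrate(component, version):
        """Every patch and update lands here first, and only here."""
        integration_line.append((component, version))

    def cut_release(name):
        """A release only references already-integrated software; nothing
        is developed directly on the release itself. Stabilization,
        deployment and release work start from this snapshot."""
        snapshots[name] = tuple(integration_line)

    integrate("component-a", "2.1")
    integrate("component-b", "1.4")
    cut_release("2016.07")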

So a rolling delivery model is a lot more than a continuous integration chain, although that is its key element.

Please keep in mind that this is an oversimplification. This description does not go into detail on other key aspects such as maintenance cycles, how upstreaming affects the process, strategies for updating the released products, etc.

A transformation process that takes an organization from a release-centric model to a rolling one is about doing less and doing it faster, so fewer people can handle more software with less pain, allowing more people to concentrate on creating value: developing new and better software instead of just shipping it.

Back to GENIVI

Moving from a release-centric to a rolling model is hard work. Frequently it is easier to start all over again. Since the GDP is still a relatively small project, we can afford to go through the transformation process step by step.

The first stage has been creating that single integration chain and treating GDP-ivi9, our latest release, and those that follow it, as deliverables of what we today call Master. Ideally, no patch will be added directly to the release branches; they should all come from Master. That way we reduce (ideally to zero) the effort of forward-porting patches, while putting the latest software in the hands of our contributors on a regular basis.

To do so, we are in the process of adapting our processes and CI system to the new model: the GDP repository structure, the wiki contents, the task management structures, several key policies, our communication around the project…
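As an illustration of the "everything comes from Master" policy just described, here is a minimal sketch scripting plain git commands from Python. The commit id is a placeholder and gdp-ivi9 stands in for a release branch; this is not the project's actual tooling.

    # Sketch: fixes reach a release branch only via Master (hypothetical).
    import subprocess

    def git(*args):
        # Run a git command, failing loudly if it does not succeed.
        subprocess.run(("git",) + args, check=True)

    def backport(commit, release_branch):
        """Land a fix on a release branch only by cherry-picking it from
        Master, never by committing to the release branch directly."""
        git("checkout", release_branch)
        git("cherry-pick", "-x", commit)  # -x records the origin commit

    # A fix already integrated in Master is brought to the release branch.
    backport("abc1234", "gdp-ivi9")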

The GDP will face a very interesting challenge, since this model needs to be proven successful for a derivative. If we are able to move fast enough, the time will come when we will need to decide whether GDP keeps being a derivative or becomes upstream; that is, either GDP limits its delivery speed based on the Poky release cycle, or we work upstream with the Yocto Project to increase our delivery speed.

That is a good problem to have, isn’t it?

If (almost) everything goes right, after adding a few needed services to GENIVI's infrastructure and ensuring the updated software complies with the selected verification criteria, the same number of people will be able to manage and deliver more software. And once the new processes become more stable, automation will not just increase efficiency; it will boost the project by allowing GENIVI to achieve goals that normally only big organizations with large delivery teams can. This is the kind of transformation that takes time to consolidate, but has a huge impact.

Based on my experience, I believe that if GENIVI is able to sustain this effort and keep a clear direction over the next couple of years, the benefits of moving towards a rolling model will be noticeable even outside the industry.

This blog post was originally published on the GENIVI blog on 2016/07/04. I have adapted the formatting for Blogger; the content should be the same.

Embrace Open Source culture: the 5 common transformations

Article originally published on LinkedIn on May 15th, 2016.

This is a story I have lived or witnessed a few times so far: the story of an organization that consumed, developed and shipped proprietary software for many years. At some point, management took the decision to use Open Source. As in most cases, the decision was forced by its customers, providers, competitors… and by the numbers.

A painful but unavoidable transformation was required.

1.- Open Source consumer

Engineers had to learn a new system, and adapt or rewrite the features that used to make the organization unique, among many other painful actions. It was expensive at the beginning but, due to the cost reduction in licenses and the change that Linux represented in the relationship with providers, in a few years it was clearly worth it. And if it wasn't, it didn't matter, since it was what the market demanded. There was no way back.

Every software organization has gone through its own unique journey, but the end of the story has been the same for all of them: they became Open Source consumers.

2.- Open Source producer

This organization gained control over its production and, by consuming Open Source, it could focus many resources on differentiation without changing its structure or its development and delivery processes. At some point, it was shipping products that involved a significant percentage of generic software taken "from the internet".

It became an Open Source producer.

You can recognise such organizations because they frequently create a specific group, usually linked to R&D, in charge of bringing all the innovation that is happening "in the Open Source community" into the organization.

Little by little this organization realised that giving fast and satisfactory answers to customer demands was becoming more and more expensive. It got stuck on what rapidly became an old kernel or toolchain version… Bringing innovation from "the community" required back-porting, solving complex integration issues, and dealing with incompatibilities between what your provider brings and what your customer wants.

So they have to upgrade.

This organization will now be able to take advantage of all the common features and compatibility that the new kernel, the new toolchain… bring. But, guess what: forward-porting all the differentiation features this organization has developed, and all the bug fixes, is so much work and so complicated that the challenge puts the organization at risk.

3.- Open Source contributors

The organization now feels the downsides of having become a blind Open Source consumer and producer. Execs feel as if a bubble has burst and they are inside it. They have less control than they thought, which turns out to be expensive, and what is worse, they lack the expertise within the organization to gain it…

But, after struggling for some time, this organization survived, which means that it has learned some lessons:

  • Upstream those features that are not differentiation factors any more.
  • Increase the investment in that small group of rock stars who are up to date on what's going on "in the different communities".
  • Invest in those Open Source projects that develop the key software you consume.
  • Even better, start your own Open Source project to promote your technologies and be perceived as a leader…
  • Reduce the upgrade cycle, so the "upgrade pain" is lower. As a side effect, the organization has the opportunity to increase cash flow by doing two smaller upgrades instead of a big one. In the end, the real profit comes with the first update, not with the new version, right?

This organization ended up upstreaming features when it could, normally very late, because "there is no time", frequently assigning the task to young, inexperienced developers or, even "better", subcontracting it, which is "cheaper".

You can recognise that an organization has gone through the described process when attending conferences given by any of its executives. They cannot stop talking about how much they contribute to this and that community, about how awesome the community is, how important it is to be open and share… yet they refer to communities/upstream as "us" and "them" all the time. They think they get it, they really do.

Most of them believe they are on the crest of the wave after going through this third transformation. They are innovative, they have been able to reduce their time to market, they are gaining reputation within a variety of communities… They are not just Open Source consumers and producers any more; they are also contributors. Some of them are even heavy and "successful" contributors.

But if you look closer, they have not adopted “the Open Source way”.

This organization keeps its traditional processes intact. It is managed in the same way; decisions are taken as when it was a proprietary company; it has not improved transparency significantly; it does not share code and practices among departments… There is a totally different reality in front of and behind its firewall, between production and R&D, between management, engineering, customer support, etc.

This organization faces friction because of this reality. It still cannot move fast enough. Upgrading is still too expensive, since now it has to be done more often than years ago, and upstreaming goes slowly, when it happens at all. They cannot control the communities they are investing in…

So going through a fourth transformation becomes unavoidable. Some refer to this transformation as "upstream first".

4.- Upstream first or becoming a good Open Source citizen

This fourth transformation basically means that upstreaming is part of your development process, not a side task. It also means that communities are part of your delivery strategy, not an aftermarket topic, and that R&D is a two-way road where you do not just consume innovation created by others but also share yours, rather than merely "promote" it. You really need to get involved.

This organization will learn that by becoming more open, its engineers learn more and faster, and so does the organization itself. It is at this stage that the organization really understands where the real value lies in the software it produces compared to what is commodity… or that is what its executives and managers most likely believe, once again. 🙂

But open source (no capital letters any more) is not about being open, but about being transparent, which means it is not just about seeing what is behind the glass, but also understanding it.

I believe the fictional organization I am talking about will have to take one more step, the fifth one. It will be about “becoming upstream”.

5.- Becoming upstream or being an open source organization

This is about understanding that, if you consume, produce and contribute to Open Source, the smart thing to do is to become an open source company. I think it is naive to pretend to take full advantage of Open Source while keeping your traditional corporate culture, which collides with that of those who produce most of the software you consume and ship: your "upstream". You are building your business on top of them. Since you cannot control them, become "them".

The smart thing to do is to surf the wave, not to fight against it and generate friction. Any manager knows that friction is expensive, reduces focus and drives away talent. It is bad for the business.

The culture change required to succeed in this fifth transformation involves thinking less about us (company and customers) and them (community), and more about us (ecosystem). It can no longer be about upstream and downstream, but about technology and service. It has to be less about upgrading and more about updating, less about "managing" and more about "leading" at every level of the organization, not just among execs and managers.

It is a transformation in which engineers are empowered, where management focuses on collecting information for execs rather than producing it and, once decisions are taken, on alignment. A transformation in which execs get closer to where the real value is, to people, because they are the "masters". A transformation in which engineers do not just follow: they get exposed, take responsibility and assume the consequences… and get paid for it.

An environment in which access to key information does not depend on your position in the organization chart, which means that power does not depend so much on what others do not know; decisions are taken based on shared knowledge. A culture in which transparency is the norm, not the exception.

In summary, a transformation that leads to a stage in which the organization steers its ecosystem instead of driving it, and so leads it in a sustainable way.

A chimera?

I understand it might sound like a chimera, but:
  1. No more so than the stories that so many CEOs and Open Source Program managers from leading corporations tell nowadays at popular FOSS events about "their transformation" would have sounded 15 years ago.
  2. I do not think the debate is whether this fifth transformation will be needed, but when and how to go through it.
  3. My 10+ years of experience in Open Source and 17+ as a manager tell me that waiting to face any of the first four transformations until you have no choice is an unnecessary risk. I suspect the same will apply to the fifth one.
So my message is,
  1. Consume, produce and contribute to open source as a good citizen.
  2. Embrace Open Source culture… sooner rather than later.

 

Testing => quality. Really?

Introduction

Nowadays automated testing is becoming mainstream. Organizations and projects are investing significant effort in creating tests, using tools to automate them and plugging them into their delivery chains. Combined with continuous integration tools, automated testing becomes significantly more useful. I find this trend unavoidable: sooner or later, every software organization will go through it, if it has not already.

This movement is fairly new. Concepts like automated testing or continuous testing, in the context of continuous delivery, still do not have 10 years of history, and we need to be careful with trends. The topic is so hot these days that the association between automated testing and quality is becoming the norm, also in Open Source.

Open Source became the winning "culture" in several industries more than five or ten years ago. Automated testing in the context of continuous delivery was not popular back then. Still, Open Source influence and adoption expanded, also because of its superior quality.

How come?

When I think about quality in Open Source, one key principle and three actions come to my mind.

 

Principle: transparency

Transparency is about seeing what others are doing, but also about understanding it. This second part is too often forgotten.

Action 1: Code review

Transparent code review (again, see and understand) is, in my opinion, the most powerful quality assurance measure a project or organization can apply. It is the fundamental action in what some call the FLOSS development model.

It has a side effect that I really like as a manager: it improves younger developers' skills. It also brings many other positive side effects.

 

Action 2: Dogfooding

A few weeks ago, in a workshop with a customer, Codethink CEO Paul Sherwood was explaining this point with an example that I stopped talking about several years ago; I found it so obvious that at some point I gave up fighting for it. After listening to him, not any more. The example was: if your organization is developing Linux-based products, use Linux, not Windows.

Simple, right?

Dogfooding is another of those actions that is frequently taken for granted in long-running Open Source projects but is not the norm in commercial environments. Many projects driven by newcomers to Open Source do not pay enough attention to it.

The mid-term impact of dogfooding on quality is impossible to calculate. Still, I believe it is huge.

Action 3: delivery model that maximises the influence of early adopters

Who are early adopters? They are the developers and power users who like to consume experimental versions or pre-releases of your "product". In relative terms, the share of them willing to report bugs is significantly bigger than among consumers.

Increasing the number of early adopters, and reducing the hurdles they face to use your software, analyse and debug problems, and report them, should be a key activity for any project worried about quality assurance. Adapting your delivery process to maximise their impact has a positive effect not just on the use cases your software was designed for but on others too, expanding the knowledge of how your software will behave in the hands of users. As between developers and delivery engineers, the feedback loop with early adopters should be very short, so you can provide them with improved pre-releases in short cycles.

Open Source has reached its current point by understanding how important the role played by early adopters is.

Personal note about this third topic

I want to make a point here before moving forward.

It seems to me that there is a new wave of Open Source projects, especially those driven by commercial organizations, that underestimate the mid-term effect early adopters have on the quality of a project. I also see how the Continuous Delivery hype, focused on developers and delivery engineers, is leaving early adopters behind in some cases, especially in those Open Source projects developed and delivered by full-time, dedicated engineers.

Many projects pay little attention to making their frequent releases truly installable, documented, and simple to debug without complicated tools or centralised infrastructure, to keeping bug trackers simple and fast to use, or to treating bug reports as a valuable asset. In summary, early adopters cannot follow the pace and, when they do, they need to spend a lot of energy to be valuable.

Let’s go back to the main argument.

 

Conclusion

Code review, dogfooding and early adopters in transparent environments have been, I believe, the pillars that have made Open Source what it is today in terms of quality. And then, only then, automated testing, or continuous testing, comes into play: in addition, not in substitution; not before, not in between… in addition.

Are you doing Open Source? Don't take shortcuts. Surf the "trend wave" instead of embracing it blindly. Learn first: look carefully at what sustainable projects are doing.

Quality is as much about culture as it is about having a nice dashboard full of green lights. Testing => quality is, in general, a wrong association of ideas.

And yes, test frameworks, board farms executing thousands of tests, green lights in dashboards, etc. are awesome. Probably a fourth pillar in the near future.

Virtue of Necessity. Canary, sublimate your company

Last July 16th I participated in the Tenerife LAN Party, in its Tenerife Innova section, invited by the Free Software Office of La Laguna University. My talk was part of the track titled (freely translated from Spanish) Open Source from the Canary Islands: Stories Told in First Person.

This Free Software Office is well known in Spain for managing the country's biggest KDE deployment, with 3,000 computers spread across several computer rooms, laboratories and libraries, among other internal projects.

My talk (20 min) had a title along the lines of Virtue of Necessity: Canary, sublimate your company. You can find the slides (in Spanish) on my site or in my Slideshare account.

What was the talk about?

The natural growth path for a software company is to create a site and grow until it reaches a point at which a second production site is needed. Meanwhile, departments like sales and technical support might grow in a distributed way. As the company grows, the number of production sites grows with it. The organizational structure varies with the nature of the company: sometimes production teams are replicated across different sites, sometimes different business units are assigned to different sites, and sometimes a particular site hosts teams that take care of different products or (micro)services.

In general, if the company follows an "agile" approach, it will try to reduce inter-site communication needs by placing team members together at a particular site. Based on my experience and on how FOSS has been developed, depending on the market you are playing in, turning your company into a distributed environment might be a smart move.

Let's start by providing some context and definitions.

What do I mean by sublimation in this context?

Sublimation is a state change in which a substance transitions from the solid state to the gas state without going through the liquid one.

What do I mean by a distributed environment?

In most agile literature, in fact in most software development management books, a distributed environment means a multi-site distribution. But in Open Source we refer to a different kind of environment. I have come up with the following (subjective) definition:

A distributed environment/organization:

  • Has no site with a significant number of employees (developers). Most of them work remotely.
  • Accesses most (if not all) corporate applications through the web, not through a VPN; that is, it is a WAN environment, not a LAN.
  • Uses chat/IRC (or equivalent) and video conferences as the default synchronous communication channels.
  • Has employees spread across several time zones.
  • Is a multicultural environment, with English as the default language.
  • Organizes regular face-to-face sprints (as we understand them in FOSS: hackathons), maybe even a corporate event where the whole company or business unit gets together.
Open Source as a distributed environment

Free/Open Source Software has a geographically distributed nature. As you know, most of the relevant communities, no matter what size we are talking about, are formed by developers located in many different countries, working from home or from a company site. If we take a look at the most relevant ones, they are truly global. Every tool and every process has been designed (intentionally or not) with this distributed nature in mind.

Now that Open Source is everywhere, more and more companies are embracing it, participating in its development and collaborating in global communities. From the process perspective, they are being influenced by the Open Source way, including its distributed nature.

Many of the companies embracing Open Source are realising that adapting to this new environment makes collaboration easier: it reduces the friction between community and internal processes. There is a long way to go, but I believe the trend is unstoppable for a variety of reasons (out of the scope of the talk). We are starting to see more and more organizations that are fully distributed, and start-ups that are born with such a structure in mind.

The Canary Islands, a fragmented market

The Canary Islands are a group of 7 islands in the Atlantic Ocean, with two major ones (900k people each) and 5 smaller ones, for a total of 2.1 million people and 11 million tourists per year. Tourism is obviously the main industry, so there are 6 international airports, two national ones and 10 harbours, half of which regularly receive big ships and cruise liners.

Data connectivity has improved a lot over the last 10 years but, due to the difficult geography, it is unequally distributed across the islands. Even on the main islands, Tenerife and Gran Canaria, a significant percentage of the territory has zero internet coverage.

So it is a very fragmented market in which, despite first-class communication infrastructure, travel across the islands takes a significant amount of time, is expensive, and connectivity can be a challenge. In general, the transportation strategy has been designed to bring people from Europe, not for internal mobility.

This means that, as a software company, consolidation and growth in such a market are tough, very tough, even if you focus on tourism.

Software companies there expand following the "natural" approach, which is to create a software production centre on one of the main islands and provide support from there to the other islands. Until they consolidate their position, software companies cannot afford developers or technical support on the second main island; if service or support is required on one of the small islands, you simply travel there. The limitations that software companies face due to market conditions rarely allow them to create a second software development centre in the Canary Islands.

There are very few Spanish cities with daily direct flights to and from the Canary Islands throughout the year. Madrid and Barcelona are the biggest markets, but also the most expensive cities. The flight takes 2.5 hours to Madrid and 3 hours to Barcelona, which is a lot by European standards. So opening a second development centre on the mainland while keeping the headquarters in the Canary Islands is a real challenge.

In other words, if you want to scale your business, you need to assume bigger risks than companies based on the continent, despite the islands being a cheaper place with plenty of professionals thanks to the existence of two universities.

… but,

In my talk I tried to show that all those limitations can be turned into advantages if organizations adopt a distributed approach early in their consolidation process, or even from the very beginning. These constraints offer a first-class laboratory for experimenting with some of the key variables that need to be managed when scaling up your company, while leaving aside some of the most complicated ones, related to a great extent to the internationalization of the organization.

I made a call to sublimate your company: going from an "on-site" to a fully distributed state, skipping the multi-site state. Even better, create your software company as a distributed environment from the very beginning.

Why sublimate your company in the Canary Islands?

I summarized the advantages of sublimating your company if you are based in the Canary Islands, Spain, in the following statements:

  • Distributed environments adapt better to the Canary Islands environment.
  • It prepares you earlier for the internationalization phase, while you are still small.
  • Distributed environments adapt well to certain business models and support needs that are becoming popular nowadays.
  • If your company wants to be, or already is, heavily involved in Open Source communities, adapting your internal processes to the collaborative environments you participate in is easier.
  • Talent attraction is less difficult.
  • Some of your fixed costs turn into production costs, that is, you gain flexibility.
Which variables will be affected by sublimating your company?
These are the most relevant variables to consider:
  • Cost per employee: fixed costs per employee are reduced, while travel costs increase.
  • Organization chart: from vertical to horizontal
  • Human Resources policies: training, coaching, people management. From f2f to online.
  • Data privacy and security: adapt to a WAN environment.
  • Tools: adapted to distributed, higher-latency environments.
  • Schedule / availability: culture shift, from presence to availability
  • Development and support methodologies: from f2f conversations to remote synchronous/asynchronous channels. From agile to FOSS? Is FOSS agile? From 8-10 hours or rotations to a wider daily production/availability window.
  • Customer relations/engagement: from traditional account management to "community" engagement management. Engineers interface with customers.
  • Transportation and connectivity: employees need to be connected and travelling demands will increase. Accountability/reimbursement processes will be more complex.
  • Business model: your business model might need to adapt to your new distributed nature.
  • Potential market: the presence of employees in new areas and the fact that your processes are adapted to distributed environments might change your target market and/or nature/size of potential customers.
  • Competitors: the influence of your new distributed nature might alter your positioning against competitors.
There are more, but these are the ones that should be considered carefully before sublimating your company in the Canary Islands. As you grow, internationalization will knock at your door very soon. There are other variables to consider in that case; they were out of the scope of this talk:
  • Time zones
  • Multiculturalism
  • Language barrier
  • Remuneration and incentives. Cost of living.
  • Fiscal and employment legislation differences
  • Travel and accommodation costs and reimbursements
  • Accountability and taxes
  • Many more…
Summary

The Canary Islands are a tougher market than mainland Spain. Adopting a distributed nature early in a software company's life cycle allows you to adapt better and faster to this environment, and prepares you better for later stages too, especially the internationalization phase. Sublimation gives you a competitive advantage, especially if you develop Open Source and participate in open, collaborative environments.

There are a number of variables that should be carefully considered, though. Managing them correctly is a requirement for success.