The value of introducing CI/CD

Updated: Sep 6, 2021

Large companies, e.g., ISPs (Internet service providers) or financial institutions, offer many services based on classically developed software that was originally designed for, e.g., mainframes and has since been ported onto virtualized systems, such as vSphere, running on x86-based platforms. This common approach allows cost-efficient renewal and higher scalability of legacy software systems. A drawback, of course, is the still rather slow ability to add new features and services, due to complex and cumbersome development, testing and rollout processes compared to modern, lean and cost-efficient CI/CD-based service offerings.


What is CI/CD?

CI/CD is a method that delivers applications and services to customers regularly and automates the phases of application development. Its main concepts are Continuous Integration, Continuous Delivery and Continuous Deployment. It fundamentally simplifies code integration and code deployment for DevOps teams.


In particular, CI/CD ensures continuous automation and monitoring over the entire lifecycle, from integration and testing through to provisioning and deployment. These interrelated practices are often referred to as the “CI/CD pipeline,” and they are supported by agile collaboration among DevOps teams.



Common assumptions behind the reluctance

Although ISPs would like to leverage the benefits of continuous integration, continuous delivery and canary deployments, they shy away from the risk of organically migrating to the next step of modernization and optimization. They are reluctant not only because of the potential financial risk, but even more because of the necessary culture change and the lack of required skills in their organizations.

There is a certain mindset that CI/CD must be introduced all at once, essentially in a disruptive manner, and that it cannot be smoothly integrated into running systems. Therefore, many customers assume CI/CD can only be applied in greenfield projects, and that legacy systems should remain untouched until they need replacement due to excessively high operational costs.


Our experience shows that this assumption is not necessarily right.


A smooth migration of legacy solutions to CI/CD can be achieved step by step, starting with a small number of applications and adapting more of them over time. This procedure provides enough time to prepare the necessary organizational changes and processes, and it keeps maximum control over the migration: taking baby steps from one proof point to the next, until a 100% migration to CI/CD is finally reached.


The challenge to resolve

For a successful introduction of CI/CD into a legacy-based environment, it is indispensable to clearly define the specific project goals (milestones and MVPs) together with their success and exit criteria.


What does MVP (“Minimum Viable Product”) mean?

Many projects fail because of the wrong assumption that the entire product can be understood and developed at once. A far better way is to develop, instead of one complete and all-inclusive product, a series of small but functioning products, so-called Minimum Viable Products. The customer is free to decide, and even has the chance to change their mind and perception during the process, in order to get the best possible result.


An example is shown in the picture:


Source: https://www.youtube.com/watch?v=0P7nCmln7PM&t=15s

with additional reference to

Making sense of MVP (Minimum Viable Product) – and why I prefer Earliest Testable/Usable/Lovable

Posted on 2016-01-25 by Henrik Kniberg

  • The customer’s basic requirement might be: “I want to get from A to B faster.”

  • The team develops a first MVP (Minimum Viable Product), e.g., a “skateboard,” which is tested by the customer.

  • The customer’s feedback, “OK, but I might need something to hold on to,” yields more precise requirements, which in turn shape the MVP of the next step.

The process continues iteratively and terminates whenever the customer is satisfied with the product. The benefit of this approach is that the customer is content with the final product, which they in effect co-developed, and the development risk is reduced because only a minimal effort is spent in each step.
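The iterative loop described above can be expressed in a few lines. This is a purely schematic sketch: the product sequence follows Kniberg’s skateboard-to-car illustration, and the feedback function is an invented stand-in for real customer input:

```python
# Schematic MVP loop: ship the smallest working product, gather
# feedback, and refine until the customer is satisfied.
mvp_sequence = ["skateboard", "scooter", "bicycle", "motorcycle", "car"]

def customer_satisfied(product):
    # stand-in for real customer feedback on the current MVP
    return product == "car"

def develop_next(current):
    # each iteration produces the next small but functioning product
    return mvp_sequence[mvp_sequence.index(current) + 1]

product = mvp_sequence[0]
while not customer_satisfied(product):
    product = develop_next(product)

print(product)  # the loop terminates once the customer is satisfied
```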


The following video summarizes the model of Henrik Kniberg: Link


What else is needed to be successful?

In order to avoid misunderstandings, a clear-cut policy is required that describes what needs to be changed, where this happens, how it will be worked out and who is in charge. It is also important to avoid unnecessary complexity, proprietary approaches, or the use of sophisticated functionality of a specific tool, which might lead to a lock-in situation and jeopardize the openness and flexibility of the solution.


Simple, common procedures and open concepts are key. Decide on elements that are proven and widely accepted in the competitive environment, and avoid niche solutions aimed at early adopters. Commonly used CI/CD tools and frameworks have been successfully utilized in different industries, e.g., automotive and social media, and can also be applied in the ISP segment. As a conceptual basis, it makes a lot of sense to define the CI/CD architecture in a way that remains flexible for future adaptations.


Some customers do not wish to deploy their artefacts and applications as microservices in Docker-based environments right away and would rather remain with VMware. They decide instead on a stepwise approach, adopting tools for deployment or orchestration as they go.


An example CI/CD toolchain for a legacy enterprise environment


The figure describes a typical CI/CD concept for customers who still run their applications, developed as RPMs, in a VM-based Linux environment. The architecture utilizes the tools JFrog Artifactory, GitLab and Ansible Tower, which are installed to serve several independent locations, from left to right: development, test, acceptance and finally production.


What does RPM stand for?

RPM Package Manager (RPM) (originally Red Hat Package Manager, now a recursive acronym) is a free and open-source package management system. The name RPM refers both to the .rpm file format and to the package manager program itself. RPM was designed primarily for Linux distributions. Most RPM files are “binary RPMs” (BRPMs) containing the compiled version of some software, e.g., an application. There are also “source RPMs” (SRPMs) containing the source code used to build a binary package.
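The distinction is visible in the file name itself, since source packages carry the .src.rpm suffix. A small helper sketch (the file names are examples):

```python
def rpm_kind(filename):
    # source RPMs carry the .src.rpm suffix; any other .rpm file is binary
    if filename.endswith(".src.rpm"):
        return "source RPM (SRPM)"
    if filename.endswith(".rpm"):
        return "binary RPM (BRPM)"
    return "not an RPM package"

print(rpm_kind("httpd-2.4.57-1.el8.x86_64.rpm"))  # binary RPM (BRPM)
print(rpm_kind("httpd-2.4.57-1.el8.src.rpm"))     # source RPM (SRPM)
```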


What are the tools used for?

Artifactory takes care of the actual artefacts, namely the RPMs that make up a specific application. It also handles the necessary OS prerequisites, i.e., patch sets or security settings, organized as golden and base images. The specific configurations of the environment, which allow the application to communicate with others, are stored in GitLab. Both repositories are bundled and sealed as so-called release bundles. Once a release bundle is available in a specific location, the local Ansible Tower takes care of deploying the application on the intended VM.
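The relationship between the two repositories and a release bundle can be pictured roughly as follows. This is a simplified data-structure sketch: the names, fields and deployment function are illustrative assumptions, not the actual APIs of Artifactory, GitLab or Ansible Tower:

```python
# Simplified model of a release bundle: it ties the application
# artifacts (held in Artifactory) to their configuration (held in GitLab).

release_bundle = {
    "name": "billing-app",
    "version": "2.4.1",
    "artifacts": [                      # stored in Artifactory
        "billing-app-2.4.1.rpm",
        "base-image-rhel8-patchset-7",  # golden/base OS image
    ],
    "config_repo": "gitlab/billing-app-config",  # environment config
    "config_ref": "v2.4.1",
}

def deploy(bundle, environment):
    # stand-in for Ansible Tower: place the artifacts on the target VM
    # using the environment-specific configuration
    return f"{bundle['name']} {bundle['version']} deployed to {environment}"

print(deploy(release_bundle, "acceptance"))
```

Bundling artifacts and configuration under one versioned unit is what makes the later staging step atomic: either the whole bundle moves forward, or nothing does.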

Throughout the CI/CD process, the release bundles in the Artifactory and GitLab repositories are staged from left to right, to finally arrive in the production environment.
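This left-to-right staging can be sketched as moving a bundle through an ordered list of environments (schematic; the stage names follow the figure):

```python
# Ordered environments, left to right as in the figure.
STAGES = ["development", "test", "acceptance", "production"]

def promote(current_stage):
    # stage the release bundle one environment to the right
    index = STAGES.index(current_stage)
    if index == len(STAGES) - 1:
        return current_stage  # already in production, nowhere further to go
    return STAGES[index + 1]

stage = "development"
while stage != "production":
    stage = promote(stage)

print(stage)  # production
```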


Typically, each environment is in a different geographic location and in a strictly separated IP network. The IP interconnection is established via security proxies. The release bundles themselves are encrypted and strongly tied together, i.e., sealed, on their way through the CI/CD toolchain. This guarantees that any manipulation of a release bundle throughout the CI/CD process is detected and cannot cause any harm.
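The sealing idea can be illustrated with a detached signature over the bundle contents. This is a minimal sketch using Python’s standard library; real toolchains use proper key management and, e.g., GPG signing, and the key handling shown here is purely illustrative:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key-kept-in-a-vault"  # illustrative only, never hard-code keys

def seal(bundle_bytes):
    # compute a MAC over the bundle so any later change is detectable
    return hmac.new(SIGNING_KEY, bundle_bytes, hashlib.sha256).hexdigest()

def verify(bundle_bytes, seal_value):
    # recompute the MAC at the next stage and compare in constant time
    expected = hmac.new(SIGNING_KEY, bundle_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal_value)

bundle = b"billing-app-2.4.1.rpm + config v2.4.1"
signature = seal(bundle)

print(verify(bundle, signature))                 # intact bundle passes
print(verify(bundle + b" tampered", signature))  # manipulation is detected
```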


Whereas artifacts and base/golden images, e.g., including additional security patches, vary quite a lot during early development, they no longer change in the subsequent environments. Only the configuration data differs slightly, as it reflects the different hostnames, IP addresses and other configurative details of the specific environments. In the development and testing environments, software simulators might be connected to the application, whereas in the acceptance area, and specifically in production, the application cooperates with real counterparts. Thus, the quality of the delivered artifacts increases throughout the CI/CD process from left to right.

The described solution is very open and flexible, and it can even be applied whenever the service provider wishes to develop applications as microservices on a Docker basis and to deploy them more elegantly through an orchestrator, e.g., Kubernetes. In that case, Artifactory and GitLab stay in place, and Ansible Tower is replaced by Kubernetes and/or OpenShift.
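The small, environment-specific configuration differences can be kept as per-environment overrides of a common template. A hypothetical sketch; the hostnames and setting names are invented:

```python
# Shared application settings plus per-environment overrides:
# only hostnames, IPs and similar details differ between stages.

base_config = {"app_port": 8443, "log_level": "info"}

environment_overrides = {
    "development": {"db_host": "db.dev.internal", "log_level": "debug"},
    "test":        {"db_host": "db.test.internal"},
    "acceptance":  {"db_host": "db.acc.internal"},
    "production":  {"db_host": "db.prod.internal"},
}

def config_for(environment):
    # merge the shared template with the environment's overrides;
    # override values win over the base settings
    return {**base_config, **environment_overrides[environment]}

print(config_for("development")["log_level"])  # debug
print(config_for("production")["db_host"])     # db.prod.internal
```

Keeping the deltas this small is exactly what allows the same release bundle to be staged unchanged through all four environments.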


The outlined concept can be completed with a couple of other tools, e.g., Keycloak for single sign-on (SSO) and HashiCorp Vault for credential and certificate management.
