Introduction:
As part of coding, developers usually create a feature branch from the main branch and implement the required features. This involves building the code, deploying it to a local environment, and testing it manually to make sure it has no bugs. For highly complicated features, more than one developer may work in parallel on the same feature branch.
Developer’s challenges:
As a developer, I have gone through this pre-merge process many times. Apart from the effort spent writing code, I have spent a lot of time populating data in the local environment and tweaking the automation test scripts to match its configuration. Manual testing then has to be done with assumptions about real-world use cases, to make sure the source code will behave as expected in the actual environment.
Because the testing happens mostly in the local environment, the feature usually appears to work. But a feature that looks good during quick local testing may only look that way because the local infrastructure lacks proper data, or because infrastructure-related configuration does not match. Merging such code can break the team's shared test environment, which disrupts continuous integration of other developers' features and can completely block the QA team from testing them. A small flutter of the butterfly's wings can have effects greater than we developers realize or intend.
Additionally, every code change requires the developer to build and deploy the source code to the local box. Because this is manual, environment-related configuration can easily go wrong. This is a process just waiting to blow up.
When multiple developers work in parallel on a feature branch, one developer has little visibility into what another has done. If the feature does not work as expected, it is difficult to troubleshoot and to tell whether a bug arises from their own code or from another developer's changes.
In some instances, the QA team needs to be involved to test complicated features. Because code from different developers flows into the test environment together, it is hard to identify which features have been implemented and to filter out the test scripts that actually need to be executed. The result is re-running already-passed test cases again and again, a process that never ends until the sprint does.
All of this requires a lot of manual configuration changes in the local environment. The obvious way to avoid those local changes, exposing infrastructure-related information such as build/deploy scripts to developers, is not practical either: it invites everyone to play around with the environment, which degrades script quality and environment stability and ends in a mess. I have seen this happen.
On the other hand, if the scripts are controlled by the deployment team, every developer has to contact that team to push even the simplest feature to the dev/test environment. This hurts the productivity of both developers and DevOps engineers and adds delay to deployment lead time.
Nowadays, optimizing the release process has become the new normal. With CI pipelines promoting source code from one environment in the workflow to the next, all the way to production, it has become much clearer where the code stands. The same visibility is needed before the source code is merged from the feature branch into the main branch: failures can then be troubleshot at an earlier stage, on the feature branch itself, and other developers and the QA team can see whether the code is ready for testing or the feature is still under development.
ReleaseIQ’s Pre-merge (Developer) pipeline:
Based on real customer use cases and the challenge of providing visibility into pre-merged source code, ReleaseIQ has built a pre-merge pipeline capability. It is implemented end-to-end by focusing on the developer's needs and the specific problems they face when deploying code to the dev/test environment. Developers need control over creating their pre-merge CI pipelines, yet they should not have access to the merged pipelines configured in the CI tool, which contain critical information. As a first step, the ReleaseIQ admin decides which CI tools and which jobs/pipelines developers can access; in this way, jobs that touch critical/high environments can be restricted from developers.
Developers commit their source code to a source control system (such as GitHub, Bitbucket, GitLab, SVN, or Perforce), and the ReleaseIQ pipeline immediately builds it, deploys it to the test environment, and runs the tests configured by the developer. Upon successful testing, the developer can immediately raise a pull request to the approver. On approval, the end-to-end ReleaseIQ pipeline starts deploying to the downstream environments such as stage and prod.
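To make the flow concrete, here is a minimal sketch of what such a commit-triggered pre-merge run does, assuming a hypothetical service with a Maven build, a deploy script, and a pytest suite. The commands and file names are illustrative assumptions, not ReleaseIQ internals:

    # Hypothetical pre-merge run: build the feature branch, deploy it to the
    # dev/test environment, and run the developer-configured tests, failing fast.
    # Build tool, deploy script, and test command are assumptions for illustration.
    import subprocess

    STAGES = [
        ("build",  "mvn -B clean package"),                  # assumed Maven build
        ("deploy", "./deploy.sh test-environment"),          # assumed deploy script for dev/test
        ("test",   "pytest tests/ --junitxml=report.xml"),   # assumed test suite
    ]

    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        # check=True stops the run at the first failing stage, as a pipeline would.
        subprocess.run(command, shell=True, check=True)

    print("All stages passed - the commit is ready for a pull request.")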
Use case: Build/Deploy/Test the feature and create pull request
Pre-merge pipeline creation in ReleaseIQ
CI tool setup:
Admin/DevOps can do a one-time setup of the CI tools (for example, Jenkins) and give developers access only to a limited set of jobs that cannot impact critical environments like PROD.
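As an illustration of that split (not the actual ReleaseIQ setup), the admin's intent can be sketched with the python-jenkins client: jobs that only touch dev/test are exposed to developers, while anything that reaches critical environments stays restricted. The Jenkins URL, credentials, and job names below are assumptions:

    # Sketch of the one-time CI tool setup, assuming a Jenkins instance and the
    # python-jenkins client; all names and credentials are illustrative.
    import jenkins

    admin = jenkins.Jenkins("https://jenkins.example.com",
                            username="admin", password="<admin-api-token>")

    # Jobs the admin exposes to developers (dev/test only); everything else stays restricted.
    DEVELOPER_JOBS = {"myservice-premerge-build", "myservice-test-deploy"}

    for job in admin.get_jobs():
        scope = "developer" if job["name"] in DEVELOPER_JOBS else "restricted"
        print(f"{job['name']}: {scope}")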
Pipeline creation:
The admin creates pipelines in the CI tools that have access only to the dev/test environments. Developers are then given access to those jobs in the CI tool settings.
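For example (again purely illustrative), a developer who has been granted access to one of those dev/test jobs could trigger it with their own credentials, while prod-facing jobs remain out of reach for that account:

    # Sketch: a developer-scoped Jenkins account can trigger only the dev/test
    # jobs the admin exposed; job and parameter names are assumptions.
    import jenkins

    dev = jenkins.Jenkins("https://jenkins.example.com",
                          username="developer", password="<developer-api-token>")

    # Trigger the pre-merge job for a specific feature branch.
    dev.build_job("myservice-premerge-build",
                  parameters={"BRANCH": "feature/checkout-v2"})

    # Attempting to trigger a restricted job (e.g. a prod deploy) would fail
    # with a permission error for this account.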
Pipeline creation by Developers:
Developers can create their own customized pipelines, which build their source code, deploy it, and run tests. Once the tests pass, the developer approves the pipeline, which creates a pull request and assigns it to the designated approvers.
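Here is a minimal sketch of that final step, assuming GitHub as the source control system: once the tests pass, a pull request is raised from the feature branch to main via the GitHub REST API and an approver is requested as reviewer. The repository, branch names, token, and reviewer are hypothetical:

    # Sketch: raise a pull request after a successful pre-merge run (GitHub REST API).
    # Repository, branches, token, and reviewer are illustrative assumptions.
    import requests

    REPO = "my-org/my-service"
    headers = {"Authorization": "Bearer <personal-access-token>",
               "Accept": "application/vnd.github+json"}

    # Create the pull request from the feature branch to main.
    pr = requests.post(f"https://api.github.com/repos/{REPO}/pulls",
                       headers=headers,
                       json={"title": "Checkout v2 feature",
                             "head": "feature/checkout-v2",
                             "base": "main"})
    pr.raise_for_status()
    number = pr.json()["number"]

    # Request the approver as a reviewer on the new pull request.
    requests.post(f"https://api.github.com/repos/{REPO}/pulls/{number}/requested_reviewers",
                  headers=headers,
                  json={"reviewers": ["approver-username"]})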
Pipeline execution:
The Commits screen in the ReleaseIQ product provides end-to-end visibility of pipeline execution, so developers have complete information at each commit level.
Troubleshooting:
If the pipeline fails, developers can access the logs, identify the bugs, and fix them on their own, without CI tool access. No access to dev machines or manual intervention from DevOps/admin is required.
With a single click on the failed pipeline stage, developers can root-cause the issue and fix it.
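For illustration only (ReleaseIQ surfaces this in its UI, without requiring developers to have CI tool access), the kind of log lookup involved can be sketched against the Jenkins API: fetch the console output of the last run of the failed job and scan it for the failing step. The job name and credentials are assumptions:

    # Sketch: pull the console log of the last run of a failed pre-merge job
    # and print the error lines; job name and credentials are illustrative.
    import jenkins

    server = jenkins.Jenkins("https://jenkins.example.com",
                             username="developer", password="<developer-api-token>")

    job = "myservice-premerge-build"
    last_build = server.get_job_info(job)["lastBuild"]["number"]
    console = server.get_build_console_output(job, last_build)

    # Show only the lines that look like errors, to help root-cause the failing stage.
    for line in console.splitlines():
        if "ERROR" in line or "FAILED" in line:
            print(line)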