This blog post describes some of the best practices we have seen over the years in developing, building, testing, and deploying software. Most engineering organizations have at least a basic understanding of the value proposition of a DevOps pipeline. However, engineering leadership may focus too heavily on simplicity, opting for a “one-stop-shop” approach instead of picking and choosing the best of different tools. Engineering teams should be free to pick the best tool for a given use case, rather than going all-in on a single vendor’s solution.
Release velocity is one of the most crucial metrics for a high-performing DevOps organization. Companies that can iterate quickly, releasing more features and new products, are more competitive and more successful in their respective markets. Engineering leaders who want to improve their team’s release velocity and overall release quality should pay careful attention to the tools and services being used. Empowered DevOps teams choose the tools best suited to support delivering critical workloads. Tools like Jenkins may already be in place, providing a solid CI/CD foundation, and augmenting Jenkins with additional capabilities can still deliver a sizable ROI. Teams looking for a place to start should follow the path of the software development lifecycle, going “left to right”.
Planning and Design
At the far left of the SDLC is the planning and design phase. While teams may overlook it in the context of the “pipeline” (think CI/CD), planning and design form the foundation of a finished product or feature.
DevOps is the marriage of development and operations teams, but the planning and design phase is the inflection point where product and engineering teams meet. Feature requests and product requirements transform into living, breathing code. There are still powerful tools and services that automation teams can employ in this phase to improve the overall pipeline. Tools like Trello and Jira help manage ticketing and work items, offering Kanban boards and other Agile tooling. For architecture and system design, services like Draw.io and LucidChart provide a capable set of design tools, which is especially valuable given the current emphasis on cloud-native and cloud-first architecture. Improved planning and design will lead to better requirements, which are the primary inputs into the next phase of the pipeline.
Committing Code and Testing
When development teams start writing code, the “rubber meets the road”: the requirements and feature requests developed during the planning and design phase start to take rough shape. Overall software quality, and critically, security, are heavily influenced by the quality of work in this phase. Untested, inefficient, and insecure code leads to a “garbage in, garbage out” scenario: production environments will be more susceptible to outages and compromise, regardless of operational tooling and monitoring.
At a minimum, development teams should use some form of version control system (VCS). Version control provides a centralized repository that tracks changes and authorship for code. A VCS like Git or Subversion enables teams of developers to contribute to the same codebase in parallel, adding changes without impacting or overwriting prior or ongoing work. A version-controlled codebase is so critical to modern DevOps workflows that it seems almost redundant to mention, yet there are still organizations that do not use one. It’s not surprising that “Codebase” is the first principle listed in the Twelve-Factor App philosophy.
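To make the workflow concrete, here is a minimal sketch of a feature-branch flow automated with the GitPython library. The repository path, branch name, and file are illustrative; it assumes GitPython is installed (`pip install GitPython`) and an existing Git repository.

```python
"""A minimal sketch of a feature-branch workflow using GitPython.

Assumes GitPython is installed; the path, branch, and file names below
are illustrative, not prescribed by Git or GitPython.
"""
from git import Repo

# Open an existing repository (path is illustrative).
repo = Repo("/path/to/project")

# Create and switch to a feature branch so in-progress work stays
# isolated from everyone else's changes on the main branch.
feature = repo.create_head("feature/login-form")
feature.checkout()

# Stage and commit a change; Git records both the change and its author.
repo.index.add(["app/login.py"])
repo.index.commit("Add login form validation")
```

Because each developer works on an isolated branch, changes are merged deliberately through review rather than overwriting one another.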
Before committing code to a VCS like Git, the tooling team should provide developers with tools that help enforce and guide coding standards. A VCS is simply a repository of code, good or bad. Simple workflows at the individual developer level can improve the overall quality of an organization’s application code. Most integrated development environments (IDEs), like VSCode and PyCharm, include linters: specialized tools that highlight basic logic and syntax errors in the code and, in some cases, can suggest and apply fixes. Pre-commit hooks, simple scripts that run before a commit is accepted, can also be used to perform further linting and testing of code quality prior to submission for review.
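As a sketch of what such a hook can look like, the script below lints staged Python files and blocks the commit if the linter fails. It assumes flake8 is installed and that the script is saved as `.git/hooks/pre-commit` with execute permissions; the choice of flake8 is illustrative.

```python
#!/usr/bin/env python3
"""Git pre-commit hook: lint staged Python files before allowing a commit.

A minimal sketch; assumes flake8 is installed and this file lives at
.git/hooks/pre-commit with execute permissions.
"""
import subprocess
import sys

# List files staged for this commit (Added, Copied, or Modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

py_files = [f for f in staged if f.endswith(".py")]
if not py_files:
    sys.exit(0)  # nothing to lint, allow the commit

# A non-zero exit code from the hook blocks the commit until fixed.
result = subprocess.run(["flake8", *py_files])
sys.exit(result.returncode)
```

Hooks like this catch trivial issues before they ever reach code review, keeping reviewer attention on design and logic rather than style nits.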
In the course of software development, more comprehensive, in-depth tooling is typically required to fully test code for functionality, potential bugs, and susceptibility to security issues and compromise. Static analysis tools can analyze and evaluate code without requiring it to be running as a live application. Tools like SonarQube can be integrated into the developer IDE, or deployed as part of a CI/CD pipeline with tools like Jenkins.
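As an example of the pipeline-integrated approach, the snippet below wraps a SonarQube scan as a CI step. It is a sketch, not SonarQube’s prescribed integration: it assumes the real `sonar-scanner` CLI is on the PATH, and the project key and the `SONAR_HOST_URL`/`SONAR_TOKEN` environment variable names are illustrative choices for this example.

```python
"""Run a SonarQube static analysis scan as a CI pipeline step.

A minimal sketch; assumes the sonar-scanner CLI is installed and that
the CI environment supplies the server URL and auth token. The project
key and environment variable names are illustrative.
"""
import os
import subprocess
import sys

result = subprocess.run([
    "sonar-scanner",
    "-Dsonar.projectKey=payments-service",  # hypothetical project key
    "-Dsonar.sources=src",
    f"-Dsonar.host.url={os.environ['SONAR_HOST_URL']}",
    f"-Dsonar.login={os.environ['SONAR_TOKEN']}",
])

# Propagate the scanner's exit code so a failed scan fails the stage.
sys.exit(result.returncode)
```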
Code written in languages like Java or C++ is compiled into a binary before being integrated and deployed into live environments. In legacy environments, software was often compiled on individual developer machines before being uploaded. In larger, modern environments that model no longer scales. A centralized build system provides homogeneous configuration and ensures build artifacts adhere to standards before being pushed into a CI/CD pipeline. Build tools like Maven, Gradle, and Bazel are popular choices, and integrate well with most CI/CD infrastructure.
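A centralized build step can be as simple as the sketch below, which invokes Maven with the same flags on every build and publishes the artifact to a shared staging area. It assumes `mvn` is on the PATH and a standard Maven project layout; the artifact and staging paths are illustrative.

```python
"""Centralized build step: compile and package with one standard Maven call.

A minimal sketch; assumes mvn is on PATH and a standard Maven layout.
The artifact and staging-area paths below are illustrative.
"""
import shutil
import subprocess

# -B (batch mode) keeps output CI-friendly. Because every build runs this
# same invocation, artifacts are produced with homogeneous configuration.
subprocess.run(["mvn", "-B", "clean", "package"], check=True)

# Hand the resulting artifact to the CI/CD pipeline's staging area.
shutil.copy("target/app-1.0.0.jar", "/ci/artifacts/app-1.0.0.jar")
```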
Integration and Deployment
Once a feature or application is finished, the completed build artifact is integrated and deployed to the live environment. Only when it reaches production is the “value” realized, as customers interact with the new feature or service. Consequently, it is critically important to make sure that production workloads are tested, deployed, and monitored with the right tooling.
The core piece of infrastructure for integration and deployment is the Continuous Integration/Continuous Delivery (CI/CD) pipeline. Replacing legacy software deployment methods like FTP, CI/CD pipelines provide a holistic automation platform, encompassing build/compilation, testing, integration, and deployment in a single interface. CI/CD pipelines form the backbone of almost any environment that adheres to DevOps principles. There is a broad selection of CI/CD software available: SaaS tools like TravisCI, CircleCI, and AWS CodeDeploy, as well as self-hosted solutions like Jenkins and Spinnaker. Container solutions like Docker and Kubernetes can provide immutable build artifacts, further enhancing the functionality of CI/CD architecture.
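The immutable-artifact idea can be illustrated with a small pipeline stage that builds a container image tagged with the commit SHA, so every artifact is unique and traceable. This is a sketch: it assumes the `docker` CLI is available, and the registry and service names are hypothetical.

```python
"""Build and push an immutable container image as a CI/CD pipeline stage.

A minimal sketch; assumes the docker CLI is installed and authenticated.
The registry and image names are illustrative. Tagging with the commit
SHA makes each artifact immutable and traceable to a specific commit.
"""
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Resolve the current commit so the image tag is unique per build.
sha = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

image = f"registry.example.com/payments-service:{sha}"  # hypothetical registry
sh("docker", "build", "-t", image, ".")
sh("docker", "push", image)
```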
CI/CD pipeline capabilities extend beyond software deployment; the underlying infrastructure can be defined and deployed using code as well. Tools like Ansible, Chef, and Puppet enable DevOps engineers to define the configuration of applications and services in code, automatically applying it during deployments and keeping configuration drift to a minimum. For infrastructure, tools like Terraform, CloudFormation, and, more recently, Pulumi can be employed to define and control the provisioning of resources like compute nodes, databases, and even entire networking zones. Teams that integrate configuration management and infrastructure-as-code tools in their CI/CD workflows have end-to-end deployment and release automation systems, which allow for faster iteration and feature delivery.
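Since Pulumi lets teams write infrastructure definitions in a general-purpose language, a minimal sketch in its Python SDK looks like the following. It assumes the `pulumi` and `pulumi-aws` packages are installed, AWS credentials are configured, and the program is run inside a Pulumi project via `pulumi up`; the resource name is illustrative.

```python
"""Define infrastructure as code with Pulumi's Python SDK.

A minimal sketch; assumes pulumi and pulumi-aws are installed and AWS
credentials are configured. The bucket name is illustrative.
"""
import pulumi
import pulumi_aws as aws

# A versioned S3 bucket for build artifacts, provisioned by the pipeline
# instead of by hand, which keeps configuration drift to a minimum.
artifacts = aws.s3.Bucket(
    "pipeline-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Expose the bucket ID so downstream pipeline stages can reference it.
pulumi.export("artifact_bucket", artifacts.id)
```

Because the definition lives in version control alongside application code, infrastructure changes flow through the same review and deployment pipeline as any other change.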
Once production workloads are live, choosing robust operational tooling is key to ensure that the customer experience remains positive and that any issue or performance degradation provides immediate, actionable feedback. The modern ecosystem of highly available, highly performant customer-facing applications has given rise to a landscape of cloud and web-focused monitoring and operational services. Tools like DataDog, AppDynamics, and New Relic can provide a granular look into the health of application infrastructure. Log aggregation and searching platforms like Elasticsearch enable critical application data to be surfaced from the vast sea of information generated by modern applications.
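As a sketch of that kind of surfacing, the query below pulls recent error-level entries from an aggregated log index using the official Elasticsearch Python client (8.x API). The endpoint, index name, and field names are illustrative and depend on how logs are shipped into the cluster.

```python
"""Search aggregated application logs in Elasticsearch for recent errors.

A minimal sketch; assumes the elasticsearch Python client (8.x) and an
existing log index. The endpoint, index, and field names are illustrative.
"""
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Find ERROR-level entries from the last 15 minutes.
resp = es.search(
    index="app-logs",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"]["message"])
```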
Utilize Diverse Tooling to Build a Robust DevOps Pipeline
The modern DevOps landscape offers engineering teams a broad selection of tools at every stage of the SDLC. Rather than going all-in on one vendor or toolchain, teams should be empowered to pick and choose the best-functioning and best-fitting tool or service for their use case.
Each step in the SDLC is important, even before the first line of code is written; hence the importance of picking the best tools at each stage. Once teams have settled on their pipeline tooling, the next key focus should be a unified way to manage and monitor the complexity of a diverse toolchain.
Learn more about Enterprise DevOps platforms with this white paper.