What are the different types of test environments? What are the differences between persistent and ephemeral deployment models? Learn all about the subject in our free guide.
Test environments are critical for enabling development teams to rigorously test new code and features before they are pushed to a production environment. These environments serve a core function of the software development lifecycle (SDLC). There can be many variations of test environments, each with its own requirements for network and software configurations and specific testing methods.
There are currently two primary models in use for test environments:

- Persistent environments, which are deployed once and maintained indefinitely
- Ephemeral environments, which are created on demand and torn down when testing is complete
While a persistent environment has been the standard approach in software development thus far, technology is now available that enables rapid, robust testing in ephemeral environments.
This article will explore essential test environment types, highlight the challenges in implementing successful test environments, and recommend best practices.
There are three essential environment types:

- Development environments
- QA environments
- Staging environments
Each of these will be described in more detail in the sections that follow.
Developers first write new code on their local computers and conduct unit testing to ensure that these blocks of code will function as specified by requirements. After passing verification, the new changes are deployed to a test environment, which could be persistent or ephemeral.
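As a hypothetical illustration of this local unit-testing step, the sketch below tests a small piece of business logic before it is pushed anywhere; the function and its rules are invented for the example:

```python
# Hypothetical example: a small piece of business logic and the kind of
# unit test a developer might run locally before pushing changes.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Minimal unit tests, runnable with plain `python` or under a test runner.
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(10.0, 150)
    raise AssertionError("expected ValueError")
except ValueError:
    pass  # out-of-range input is rejected, as the requirement specifies
```

Only after checks like these pass does the change move on to a shared test environment.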
This is the first space where each developer’s features could potentially be tested together, although this isn’t required. Typically, these environments will be configured with essential frontend and backend services but not set up to make API calls to vendors or external systems. Vendors usually charge per transactional API call, which can quickly become expensive, and these calls are not needed until integration testing, so they are left disabled at this stage.
A huge benefit of development environments is that developers can deploy their code and make environmental configuration changes without worrying about affecting business users. There can also be multiple development environments so that the engineers can focus on their own features and not worry about conflicting code written by other developers.
QA environments are typically configured similarly to development environments. However, they need to have fully operational databases set up to support test case verification.
These are the specific objectives for QA environments:

- Verify that all developers’ merged features function together as specified
- Test the application against realistic, production-like data
- Confirm that data is validated, stored, and updated correctly
There are a couple of best practices that teams can follow to help meet these objectives.
Functional, end-to-end (E2E), and interface-to-database testing are performed in QA environments. In these spaces, all new features from each developer have been merged into a common branch, so they can be tested together.
There is a chance that an engineer’s code won’t function as it did in that individual’s development environment. If there are code dependencies between features, an engineer might discover that another developer’s code isn’t working correctly, or simply that a bug has been introduced. Engineers must check with each other that there are no conflicting changes, resolve dependencies, and ensure that their code still satisfies the system requirements.
If multiple teams share persistent QA environments, they can run into queuing issues and missed deadlines. Each team’s features might need to be tested separately, so one team must wait for another to finish before it can start.
Ephemeral environments resolve this issue by creating a full-stack environment with each pull request and then tearing it down when the pull request is merged into the master branch. Using this system, teams no longer run the risk of accidentally testing their changes against each other, and there is no need to wait for another team to finish working in a QA environment.
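The per-pull-request lifecycle can be sketched as follows. The provisioning and teardown calls here are placeholders, not a real API; in practice they would be CI pipeline steps or calls to your platform’s API:

```python
# Hedged sketch of the ephemeral-environment lifecycle: one full-stack
# environment per pull request, destroyed when the PR is merged.
# The function bodies are placeholders, not a real provisioning API.

def create_preview_environment(pr_number: int) -> str:
    """Provision a full-stack environment for a pull request (placeholder)."""
    env_id = f"pr-{pr_number}-preview"
    # In practice: create a namespace/stack and deploy the PR's branch here.
    return env_id

def destroy_preview_environment(env_id: str) -> None:
    """Tear the environment down once the pull request is merged (placeholder)."""
    # In practice: delete the namespace/stack created above.
    print(f"tearing down {env_id}")

# Lifecycle: environment exists only for the lifetime of the pull request.
env = create_preview_environment(42)   # triggered when the PR is opened
destroy_preview_environment(env)       # triggered when the PR is merged
```

Because every pull request gets its own environment, no team ever tests against another team’s in-flight changes.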
At this point in the development process, developers have likely tested their code with unit tests that feed it hand-crafted inputs. Now they must test with data that accurately reflects the composition of the production data the application will interact with. Each piece of data should be checked against validation criteria such as character length, allowable characters, and data type. Depending on the industry, this data may need to be anonymized rather than drawn from actual production records, since using real data could lead to privacy violations, including HIPAA noncompliance.
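A minimal sketch of such validation checks is shown below. The field names and rules are illustrative assumptions, but the checks mirror the criteria mentioned above: character length, allowable characters, and data type:

```python
import re

# Hypothetical validation rules of the kind described above. The fields
# and thresholds are invented for illustration.
RULES = {
    "username": {"type": str, "max_len": 30, "pattern": r"^[A-Za-z0-9_]+$"},
    "age":      {"type": int, "min": 0, "max": 150},
}

def validate(record: dict) -> list:
    """Return a list of validation errors for a record (empty if valid)."""
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
            continue
        if "max_len" in rule and len(value) > rule["max_len"]:
            errors.append(f"{field}: longer than {rule['max_len']} characters")
        if "pattern" in rule and not re.match(rule["pattern"], value):
            errors.append(f"{field}: contains disallowed characters")
        if "min" in rule and value < rule["min"]:
            errors.append(f"{field}: below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{field}: above maximum {rule['max']}")
    return errors

assert validate({"username": "test_user", "age": 30}) == []
assert validate({"username": "bad name!", "age": "30"}) != []  # two violations
```

Running checks like these against anonymized, production-shaped records catches data-handling bugs before integration testing.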
It is also critical to verify that the data is stored properly and efficiently. An analysis must be done to select the appropriate database type based on the data and information the application will be handling. The database should be configured with the appropriate columns and data types. It must also be updated correctly and efficiently, either through ETL jobs or transactional API calls. If subsequent transaction calls are made before an update completes, they may read stale data.
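The point about correct, transactional updates can be illustrated with the standard-library SQLite driver. This is a hedged sketch with an invented schema, not a recommendation of SQLite for production:

```python
import sqlite3

# Hedged sketch: a transactional update, so that a partial failure never
# leaves the database in an inconsistent state. Schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
        # Duplicate primary key -> IntegrityError, triggering a rollback:
        conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 0)")
except sqlite3.Error:
    pass  # the whole transaction was rolled back

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100.0 -- the partial UPDATE was undone with the failure
```

Wrapping related writes in one transaction is what keeps readers from ever observing a half-applied update.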
Staging environments enable developers to see how well the application will perform in production. They permit new code to be put to the most rigorous of all tests. End-to-end testing, user testing, load testing, and, if necessary, mobile testing are all performed in this environment. For all of this testing to be possible, all services must be enabled. Every frontend, backend, and microservice must be configured and available, and API calls to external vendor systems must be turned on and functional.
The following best practices can help implement successful staging environments.
If business stakeholders are involved in the project, make sure they have tested and signed off on the new features. They might catch something a developer missed, allowing bugs to be remediated before the release reaches the production environment.
An incredibly important purpose of a staging environment is to discern an application’s ability to handle production-like traffic and volume. An analysis should be done on the number of users and how much data the software is expected to interact with. Set realistic benchmarks for how the system should perform and then throw the maximum potential volume at it. Make any necessary adjustments and then test again. Developers may need to refine some of their newly developed code or even add infrastructure resources to meet established uptime and availability standards.
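The benchmark-then-load-test loop described above can be sketched in miniature. Here the request handler is a simulated stand-in and the target is an illustrative assumption; in a real staging environment you would drive actual HTTP traffic at the deployed application:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch: fire concurrent "requests", measure latency, and compare
# the 95th percentile against an agreed benchmark. The handler below only
# simulates work; swap in real requests against your staging endpoint.

def handle_request(_: int) -> float:
    start = time.perf_counter()
    time.sleep(0.005)  # simulate ~5 ms of server-side work
    return time.perf_counter() - start

TARGET_P95_SECONDS = 0.05  # illustrative benchmark agreed with stakeholders

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(200)))

p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
print(f"p95 latency: {p95:.4f}s (target {TARGET_P95_SECONDS}s)")
assert p95 <= TARGET_P95_SECONDS, "load test failed: tune code or add capacity"
```

If the assertion fails, that is the signal to refine the code or add infrastructure resources, then rerun the test, exactly as the loop above describes.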
Developers should also check if the application is making any unnecessary API calls. Verify that the application workflow is designed to be as efficient as possible. Perform multiple rounds of load and performance tests to ensure that the application will succeed in production. Some teams might even explore the use of microservices, if they are not already in use.
Microservices are loosely coupled services that can be managed, updated, and supported independently. Compared to a monolithic repository, where all services are tightly coupled and completely dependent on each other, a microservices architecture allows an application to maintain overall availability if one service goes down. Engineers can identify exactly what service failed, create and deploy a fix to that service, and restore full functionality, all without having to deploy changes to the entire application.
Logging, transaction tracing, and metrics are crucial for gauging the success of a production release. For a complex system, these may appear to be daunting to implement, but without them, there is no way to understand the successes and failures of an application. Lack of observability can create many headaches during issue triaging and production support.
Here are some best practices to follow in this area.
With complete and thorough logs, developers can monitor any issues in QA and staging environments and quickly resolve production issues. Developers can track a transaction through the whole stack and pinpoint the bug. Logging also enables technical and business users to identify exact failure points and provide definitive information to developers for reference when modifying their code.
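One common way to make a transaction traceable through the whole stack is to stamp every log line with a shared correlation ID. The sketch below uses the standard-library `logging` module; the service and field names are illustrative:

```python
import logging
import uuid

# Hedged sketch: every log line for one transaction carries the same
# correlation ID, so the transaction can be traced end to end.
logging.basicConfig(
    format="%(asctime)s %(levelname)s [txn=%(txn_id)s] %(message)s",
    level=logging.INFO,
)
logger = logging.getLogger("checkout")  # illustrative service name

def process_order(order_id: int) -> str:
    txn_id = uuid.uuid4().hex[:8]  # correlation ID shared by all log lines
    extra = {"txn_id": txn_id}
    logger.info("received order %s", order_id, extra=extra)
    logger.info("payment authorized for order %s", order_id, extra=extra)
    logger.info("order %s complete", order_id, extra=extra)
    return txn_id

process_order(1001)
```

Grepping the aggregated logs for one `txn=` value then yields the full story of a single transaction, which is what lets developers pinpoint the exact failure point.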
Metrics allow present and historical trends to be observed. Engineers can track the availability and uptime of test environments while also being able to look back and identify any potential infrastructure or software issues that may be creating bottlenecks in the development process.
For example, an application might use two containers to load data from an ETL job into a new database. Metrics could show that this process takes an average of two hours to run, but stakeholders need it to run in one hour. Developers could increase the container count to four to meet the one-hour timeframe.
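That back-of-the-envelope calculation assumes the ETL job’s runtime scales roughly linearly with container count, which is a simplification; with that assumption it looks like this:

```python
import math

# The capacity estimate from the example above, under the (simplifying)
# assumption that runtime scales linearly with container count.

def containers_needed(current_containers: int,
                      current_hours: float,
                      target_hours: float) -> int:
    """Estimate containers required to hit a target runtime."""
    total_work = current_containers * current_hours  # container-hours of work
    return math.ceil(total_work / target_hours)

print(containers_needed(2, 2.0, 1.0))  # -> 4, matching the example above
```

Metrics then confirm whether the scaled-up job actually meets the one-hour requirement, since real jobs rarely scale perfectly linearly.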
With the persistent environment model, each of the aforementioned environment types is configured and deployed with the intention of existing indefinitely. With ephemeral environments, a cloud environment is created for each pull request and torn down when that pull request is merged into the master branch.
One of the many advantages of using ephemeral environments is that the business does not need to maintain and pay for each persistent environment. They also enable developers to catch bugs and test their changes in fully configured environments, eliminating the need to work through each test environment that may not have all databases and services available. Persistent environments may not be fully configured until reaching a higher-level QA or staging environment.
Additionally, ephemeral environments eliminate the need to take other development teams’ testing timelines into consideration. Engineering teams won’t need to designate “environment ownership” and won’t risk delaying their project delivery dates. As a result, ephemeral environments can significantly increase software development velocity.
The table below summarizes the differences between persistent and ephemeral environments.

| | Persistent environments | Ephemeral environments |
|---|---|---|
| Lifespan | Deployed once and maintained indefinitely | Created per pull request, torn down on merge |
| Cost | Maintained and paid for continuously | Paid for only while a pull request is open |
| Configuration | May not be fully configured until a higher-level QA or staging environment | Fully configured full stack for every pull request |
| Team scheduling | Teams may queue for shared environments and designate “environment ownership” | No queuing; each pull request gets its own environment |
After evaluating each type of test environment, it’s important to ensure that your business has the necessary processes in place to enable a successful production implementation. Here are a few questions to consider when thinking about your test environments:

- Are your environments fully configured with the services and data each testing stage requires?
- Do teams queue for shared environments, putting delivery dates at risk?
- Is your test data production-like, validated, and properly anonymized?
- Do you have the logging, transaction tracing, and metrics needed to triage issues quickly?
Answering these questions will allow you to refine and enhance a very important piece of your software development lifecycle.
Your development teams may benefit from reorganizing their environment structure. Using the ephemeral environment model might streamline your testing and deployment process.
Uffizzi offers an open-source solution as well as a SaaS solution, Uffizzi Cloud. Engineering teams can quickly set up an integration with GitHub to see whether they may benefit from this testing model, or they can use the open-source solution to add ephemeral environments to their own infrastructure.