Hi all! Today we're sharing the final part of the article with you.
Deployment testing
This style of testing is a powerful approach: it lets us do white-box testing of how our infrastructure code works internally. However, it somewhat limits what we can test. The tests run against the in-memory deployment plan that Pulumi creates before the actual deployment, so the deployment itself cannot be tested. For such cases, Pulumi has an integration testing framework, and the two approaches work great together!
The Pulumi integration testing framework is written in Go, and it is what we use to test most of our internal code. Where the unit testing approach discussed earlier is closer to white-box testing, integration testing is black-box. (There are also options for deeper internal testing.) The framework takes a complete Pulumi program and performs various lifecycle operations on it: deploying a new stack from scratch, updating it with variations, and deleting it, possibly multiple times. We run these tests regularly (for example, nightly) and as stress tests.
By running the program with this framework, you can check the following:
- Your project code is syntactically correct and works without errors.
- The stack and secrets configuration settings work and are interpreted correctly.
- Your project can be successfully deployed to the cloud provider of your choice.
- Your project can be successfully upgraded from the initial state to N other states.
- Your project can be successfully destroyed and removed from your cloud provider.
As we will see shortly, this framework can also be used to perform runtime validation.
Simple integration test
To see this in action, we'll look at the pulumi/examples repository, since our team and the Pulumi community use it to test their own pull requests, commits, and nightly builds.
Below is a simplified test of our aws-js-s3-folder example, example_test.go:
package test

import (
	"os"
	"path"
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

func TestExamples(t *testing.T) {
	awsRegion := os.Getenv("AWS_REGION")
	if awsRegion == "" {
		awsRegion = "us-west-1"
	}
	cwd, _ := os.Getwd()
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Quick:       true,
		SkipRefresh: true,
		Dir:         path.Join(cwd, "..", "..", "aws-js-s3-folder"),
		Config: map[string]string{
			"aws:region": awsRegion,
		},
	})
}
This test goes through the basic lifecycle of creating, modifying, and destroying a stack for the aws-js-s3-folder folder. It takes about a minute to report a passing test:
$ go test .
PASS
ok ... 43.993s
There are many options to customize the behavior of these tests; see the full list in ProgramTestOptions. For example, you can configure a Jaeger endpoint for tracing (Tracing), indicate that you expect the test to fail for negative testing (ExpectFailure), apply a series of "edits" to the program for successive state transitions (EditDirs), and much more. Let's see how to use them to test the deployment of an application.
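As a sketch of how these options combine, the harness configuration below applies two successive edits after the initial deployment and marks a separate directory as an expected failure. The directory names ("step1", "step2", "bad-config") are hypothetical placeholders for your own repository layout; running this for real requires cloud credentials and the Pulumi integration module.

```go
package test

import (
	"os"
	"path"
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

func TestLifecycleWithEdits(t *testing.T) {
	cwd, _ := os.Getwd()
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir:    path.Join(cwd, "initial"), // hypothetical starting program
		Config: map[string]string{"aws:region": "us-west-1"},
		// Each EditDir is deployed as an update to the same stack,
		// letting you test transitions from the initial state to N others.
		EditDirs: []integration.EditDir{
			{Dir: path.Join(cwd, "step1"), Additive: true},
			{Dir: path.Join(cwd, "step2"), Additive: true},
		},
	})
}

func TestNegativeCase(t *testing.T) {
	cwd, _ := os.Getwd()
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir: path.Join(cwd, "bad-config"), // hypothetical program that should not deploy
		// The test passes only if the deployment fails.
		ExpectFailure: true,
	})
}
```

The negative case is worth calling out: without ExpectFailure, a failed deployment fails the test; with it, a successful deployment does.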
Checking Resource Properties
The integration test discussed above ensures that our program "works", that is, it doesn't crash. But what if we want to check the properties of the resulting stack? For example, that certain kinds of resources were (or were not) provisioned, and that they have certain attributes.
The ExtraRuntimeValidation parameter of ProgramTestOptions lets us inspect Pulumi's post-deployment state and perform additional checks on it. This state includes a complete snapshot of the resulting stack: its configuration, exported output values, all resources and their property values, and any dependencies between resources.
To see a basic example of this, let's check that our program creates one S3 bucket:
integration.ProgramTest(t, &integration.ProgramTestOptions{
	// as before...
	ExtraRuntimeValidation: func(t *testing.T, stack integration.RuntimeValidationStackInfo) {
		var foundBuckets int
		for _, res := range stack.Deployment.Resources {
			if res.Type == "aws:s3/bucket:Bucket" {
				foundBuckets++
			}
		}
		assert.Equal(t, 1, foundBuckets, "Expected to find a single AWS S3 Bucket")
	},
})
Now, when we run go test, it will not only run the battery of lifecycle tests but also, after the stack deploys successfully, perform these additional checks on the resulting state.
Runtime tests
So far, all tests have been purely about deployment behavior and Pulumi's resource model. What if you want to verify that your provisioned infrastructure actually works? For example, that the virtual machine is running, the S3 bucket contains what we expect, and so on.
You may have already guessed how to do it: the ExtraRuntimeValidation option of ProgramTestOptions is a great place for this. At this point you are running an arbitrary Go test with access to the full state of your program's resources, including information such as virtual machine IP addresses, URLs, and anything else needed to actually interact with the resulting cloud applications and infrastructure.
For example, our test program exports the bucket's webEndpoint property as websiteUrl, the full URL at which the configured index document can be fetched. Although we could dig through the state file to find the bucket and read that property directly, in many cases our stacks export useful properties like this that are convenient to use for validation:
integration.ProgramTest(t, &integration.ProgramTestOptions{
	// as before...
	ExtraRuntimeValidation: func(t *testing.T, stack integration.RuntimeValidationStackInfo) {
		url := "http://" + stack.Outputs["websiteUrl"].(string)
		resp, err := http.Get(url)
		if !assert.NoError(t, err) {
			return
		}
		defer resp.Body.Close()
		if !assert.Equal(t, 200, resp.StatusCode) {
			return
		}
		body, err := ioutil.ReadAll(resp.Body)
		if !assert.NoError(t, err) {
			return
		}
		assert.Contains(t, string(body), "Hello, Pulumi!")
	},
})
Like our previous runtime checks, this one runs immediately after the stack comes up, all in response to a simple go test invocation. And this is just the tip of the iceberg: every feature of Go testing is available to you.
Continuous Infrastructure Integration
It's great to be able to run tests on a laptop while making many changes to the infrastructure, validating them before sending them for code review. But we, and many of our customers, test infrastructure at various stages of the development lifecycle:
- In every open pull request for a pre-merge test.
- In response to each commit, to double-check that the merge was done correctly.
- Periodically, such as at night or weekly for additional testing.
- As part of performance or stress testing, which typically runs over a long period, executes tests in parallel, and/or deploys the same program multiple times.
For each of these scenarios, Pulumi supports integration with your favorite continuous integration system, giving your infrastructure the same test coverage as your application software. Pulumi has support for common CI systems; refer to the documentation for more details.
Ephemeral environments
A very powerful capability this opens up is deploying ephemeral environments solely for the purpose of acceptance testing. If you are using GitHub, Pulumi offers a GitHub App to integrate this into your pull request workflow. When you use Pulumi for your core acceptance tests, you gain new automation options that improve team productivity and give you confidence in the quality of your changes.
Π‘onclusion
In this article, we have seen that using general-purpose programming languages makes available to us many of the software development techniques that have proven useful for our applications: unit testing, integration testing, and combinations of the two for extensive runtime validation. Tests are easy to run on demand or in your CI system.
Pulumi is open source software, free to use, and works with your favorite programming languages and clouds.
Source: habr.com