PyTest Fixtures for Fun and Profit

Ryan Collingham
Published in Analytics Vidhya
6 min read · Apr 26, 2020


Photo by Chris Ried on Unsplash

In my previous article, I talked about why PyTest is a great tool for functional testing of any application, and the different ways to integrate it into your development environment. Now, I’m going to take a deeper dive into one aspect of PyTest that is essential for writing good functional tests: fixtures. Fixtures are a modular tool for setting up and tearing down a testing environment, and using them correctly will allow you to structure your test code better and test more with less code.

Setup and teardown using fixtures

As a concrete example, I’m going to use a (very) simple app written in Go, which starts an HTTP server process. The reason I’m using Go is to show how PyTest can be used to test applications written in any language, not just Python.

A super basic HTTP server, written in Go

Now, I’m going to write a test for this app using PyTest. The runtime state of the app is encapsulated in a class which exposes start() and stop() methods. A single testcase starts up the app, makes an HTTP request, asserts on the status and content of the response, then stops the app.

The starting point for testing our app

On the surface this code might look entirely reasonable, but it actually has a serious issue. What will happen if I change my app to return different text, breaking the final assertion? Well, the test will fail, of course; that is exactly what we want to happen. But that’s not all: since the assert failed, the rest of the testcase will not execute, which means that the stop() code for our App is never run. If you try it yourself, you’ll see that the app process remains running even after the Python process has exited.

ps -ef | grep '\./app'
501 1656 1 0 12:54pm ttys001 0:00.01 ./app

Even when tests fail, they should never leak resources like processes, threads, file handles or sockets. Leaked resources are not just a waste of system resources; they can also cause subsequent tests to fail in unexpected ways. For example, if the app I was testing always listens on the same port, failing to stop the app at the end of the test means that re-running the test will always fail, because the port is still in use.

To ensure that stop() always gets called, you could wrap the testcase assertions in a try...finally block; alternatively, you could make the App class into a context manager, or use old-fashioned setup and teardown functions to achieve the same effect. However, there is a better way: fixtures.

Factoring out the app setup into a fixture

Notice how the app fixture uses the yield keyword to pass the started App object into the test function. After the testcase has finished, regardless of whether it passed or failed, the app fixture will resume and terminate the app process. By putting the setup and teardown code for the App in the same function, it is easy to see how the teardown corresponds to what was set up, in a way you don’t get with separate setup and teardown methods. All we had to do to use the fixture was add its name as a parameter to the test function, so the code within the test function only has to deal with the actual testing being done.

Taking it further: fixture scopes

Fixtures are re-usable, so as you develop your app you can go ahead and add more tests which depend on the same app fixture. However, as you do so you need to bear in mind the scope of the fixture. By default, fixtures have “function” scope, which means that a new instance of the fixture will be set up and then torn down for every function that uses it. My example app above should return the exact same response for any HTTP method, so I’m going to add a couple more tests for the PUT and POST methods. When I run the tests and disable stdout capturing with the -s flag, I can see that my app was started and stopped 3 times:

test_app.py::test_app Starting app
PASSEDStopping app
test_app.py::test_app_put Starting app
PASSEDStopping app
test_app.py::test_app_post Starting app
PASSEDStopping app

There is a benefit to keeping fixture scopes small: by creating a new instance for each test, you prevent side-effects on the app from one test impacting subsequent tests. However, the drawback is that your tests will take much longer to run as you add more testcases. We can speed up our tests by changing the scope of the fixture to “module”:

Sharing the same app instance between test functions
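A sketch of the module-scoped version; compared with the earlier fixture, only the decorator argument changes (the App wrapper is the same hypothetical one as before):

```python
import subprocess
import time

import pytest


class App:
    """Hypothetical wrapper around the compiled Go binary (path assumed)."""

    def start(self):
        print("Starting app")
        self.proc = subprocess.Popen(["./app"])
        time.sleep(0.5)

    def stop(self):
        print("Stopping app")
        self.proc.terminate()
        self.proc.wait()


@pytest.fixture(scope="module")
def app():
    app = App()
    app.start()
    yield app   # shared by every test in this module
    app.stop()  # runs once, after the last test in the module
```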

Now, all tests in this module which depend on the app fixture will use a shared instance. PyTest will start the app process, run all of our tests against it, then stop the app. Fixture scopes can also be set to “class”, “package” or “session” to apply to all tests in the same class, package or test session respectively. There are no hard-and-fast rules on which scope you should use, but as a general rule of thumb, use the lowest-level scope that allows your tests to complete in a reasonable length of time. Since our app starts up quickly and we only have three tests right now, a function-scoped fixture is fine in our trivial example, but for an app of realistic complexity with tens or hundreds of testcases, sharing the app instance at least at the module scope makes sense. If your fixture setup is particularly expensive, for example if your tests need to spin up a number of services running in separate Docker containers (take a look at pytest-docker), you will probably want to give that fixture session scope. Just bear in mind that you will need to be especially careful to avoid unexpected side-effects between tests that use a shared fixture instance.

Multiply your testing with parametrization

Parametrization is a very useful technique to increase your testing coverage. You may be aware that parametrization can be applied to test functions to generate multiple tests from a single template. For example, I could have simplified my earlier testing of various HTTP methods using parametrization.

Well, that’s not all: you can also parametrize fixture functions to generate multiple different fixtures from a single template. All tests that use a parametrized fixture will be run once for each parametrization of the fixture. The way you parametrize a fixture function is a little different to a test function, though the effect is the same.

Parametrizing our fixture and test

Fixtures can depend on other fixtures

Fixtures are modular, which means that fixture functions can depend on other fixtures, which can in turn depend on other fixtures etc. Fixture modularity turns out to be very powerful, as it allows you to orchestrate multiple apps and services that depend on each other into a single testing environment. A simple and common example is when you need to use a client to connect to your server process under test. By wrapping the test client in a fixture which depends on the server fixture, the client can (for example) extract the server’s listen address to allow it to connect, without relying on a hard-coded port.

Going back to our earlier example, we could create an AppClient class to store the app address and make it slightly simpler to access URLs on the server. Note how I no longer have to make the app fixture an explicit dependency of the test_app function, since it is now depended on indirectly via the app_client. Also notice how the app_client fixture has “function” scope, so a new instance will be created for every test — this is fine because initialising the client is cheap. The app fixture is still “module” scoped so only a single app instance is started.

Adding the app_client fixture

In Summary

We’ve talked about how you can use fixtures in PyTest to separate setup and teardown from your test code. On top of that, we’ve worked through an example showing how you can use fixtures to manage an app running in a separate process, the importance of fixture scope, and how you can use parametrization and modular fixtures to test more with less code.
