The art of writing good consumer Pact tests is mostly about knowing what not to test. Getting this right will make the difference between Pact tests that are lightweight and helpful, and Pact tests that make you wish you'd stuck with integration testing. Your Pact tests should be as loose as they possibly can be, while still ensuring that the provider can't make changes that will break compatibility with the consumer.
Pact tests should be "unit tests" for your client class, and they should focus solely on ensuring that the request creation and response handling are correct. If you use Pact for your UI tests, you'll end up with an explosion of redundant interactions that will make the verification process tedious. Remember that Pact is for testing the contract used for communication, and not for testing particular UI behaviour or business logic.
Usually, your application will be broken down into a number of sub-components, depending on what type of application your consumer is (e.g. a Web application or another API). This is how you might visualise the coverage of a consumer Pact test:
Here, a Collaborator is a component whose job is to communicate with another system. In our case, this is the OrderApiClient communicating with the external Order Api system. This is what we want our consumer test to inspect.
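To make the idea concrete, here is a minimal sketch of such a collaborator. The names (OrderApiClient, the /orders/:id path, the response fields) are illustrative assumptions, not part of the Pact documentation; the point is that the class has exactly two responsibilities a consumer Pact test should exercise: building the outgoing request, and mapping the response into a domain value.

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical collaborator: its only job is to talk HTTP to the Order Api.
class OrderApiClient
  def initialize(base_url)
    @base_uri = URI(base_url)
  end

  # The "request creation" side that a consumer Pact test should cover:
  # path, headers, and (for writes) the serialised body.
  def build_get_order_request(order_id)
    Net::HTTP::Get.new("/orders/#{order_id}", { "Accept" => "application/json" })
  end

  # The "response handling" side: mapping the raw body into a domain value.
  def parse_order(response_body)
    data = JSON.parse(response_body)
    { id: data["id"], total: data["total"] }
  end
end
```

A consumer Pact test would point an instance of this class at the Pact mock server and assert on the parsed result, rather than driving the whole application through its UI.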
Functional testing is about ensuring the provider does the right thing with a request. These tests belong in the provider codebase, and it's not the job of the consumer team to be writing them.
Contract testing is about making sure your consumer team and provider team have a shared understanding of what the requests and responses will be in each possible scenario.
Pact tests should focus on:

- exposing bugs in how the consumer creates the requests or handles responses
- exposing misunderstandings about how the provider will respond

Pact tests should not focus on:

- exposing bugs in the provider (though this might come up as a by-product)
You can read more about the difference between contract and functional tests here.
The rule of thumb for working out what to test or not test is: if I don't include this scenario, what bug in the consumer, or what misunderstanding about how the provider responds, might be missed? If the answer is "none", don't include it.
Avoid the temptation to make assertions about general business rules that you know about the provider (e.g. the customer ID is expected to be in the format [A-Z][A-Z][A-Z]\-\d\d\d). Only make assertions about things that would affect your consumer if they changed (e.g. a link must start with http because your app expects absolute URLs, and would error if it received a relative one). This allows the provider to evolve without triggering false alerts from unnecessarily strict pact verification tests.
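The difference can be sketched in plain Ruby. The response body below is a stand-in for what the mock provider would return; the field names are illustrative assumptions:

```ruby
require "json"

# A stand-in for a response body from the provider.
body = JSON.parse('{"customerId":"ABC-123","link":"http://example.org/orders/1"}')

# Too strict: this re-asserts the provider's own business rule about ID format.
# If the provider legitimately changes the format, the pact fails for no good reason.
too_strict = body["customerId"].match?(/\A[A-Z]{3}-\d{3}\z/)

# Loose enough: only asserts what this consumer actually relies on -
# an absolute URL, because a relative link would break the app.
loose = body["link"].start_with?("http")
```

Both checks pass today, but only the second one is a fact about the consumer's needs; the first is a fact about the provider's internals, and belongs in the provider's own tests.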
Use Pact as a mock (calls to mocks are verified after a test), not a stub (calls to stubs are not verified). Using Pact as a stub defeats the purpose of using Pact.

Use Pact for isolated tests (i.e. unit tests) of the class(es) that will be responsible for making the HTTP calls from your Consumer application to your Provider application, not for integrated tests of your entire consumer codebase.

Think carefully before using Pact for any sort of functional or integrated tests within your consumer codebase.
If you use Pact with exact matching for tests that cover multiple layers of your application (especially your UI), you will drive yourself nuts. You will have very brittle Consumer tests, as Pact checks every outgoing path, JSON node, query param and header. You will also end up with a cartesian explosion of interactions that need to be verified on the Provider side. This will increase the amount of time you spend getting your Provider tests to pass, without usefully increasing the amount of test coverage.
If you use Pact for your UI tests you will likely end up with:

- consumer tests that are very hard to debug, because you will be setting up multiple interactions on the mock server at a time, and potentially using multiple mock servers at once.
- multiple redundant calls to the same endpoint with slight variations of data, which increase the maintenance burden without helpfully increasing the test coverage of your API.
Ideally, your Pact tests should be scoped to cover as little consumer code as possible while still being a useful exercise (i.e. don't just test a raw HTTP client call), and should use as few mocked interactions at a time as possible.
A better approach than using Pact for UI tests is to use shared fixtures, or the generated pact itself, to provide HTTP stubs for tests that cover all layers of your consumer. Following the "testing pyramid" approach, most of the tests for your UI components should be isolated tests anyway, and tests covering the full stack of your consumer should be kept to a minimum.
Keep your isolated, exact match tests. These will make sure that you’re mapping the right data from your domain objects into your requests.
For the integration tests, use loose, type-based matching for the requests to avoid brittleness, and pull the setup out into a method that can be shared between tests so that you do not end up with a million interactions to verify (this helps because the interactions collection in the Pact acts like a set, and discards exact duplicates).
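The set-like behaviour is why a shared setup method pays off, as this small sketch shows (the interaction shape here is a simplified stand-in for a real pact interaction, not the actual Pact file format):

```ruby
require "set"

# Shared setup method: every test that needs this provider state calls the
# same method, so every test produces a byte-for-byte identical interaction.
def order_exists_interaction
  {
    description: "a request for order 42",
    request: { method: "GET", path: "/orders/42" },
    response: { status: 200, body: { id: 42 } }
  }
end

# The pact's interaction collection behaves like a set: exact duplicates
# collapse, so three tests still yield only one interaction to verify.
interactions = Set.new
3.times { interactions << order_exists_interaction }
```

If each test hand-rolled its own slightly different copy of the interaction (a different order ID here, an extra field there), nothing would collapse, and the provider would have to verify every variant.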
If you don’t care about verifying your interactions, you could use something like Webmock for your integrated tests, and use shared fixtures for requests/responses between these tests and the Pact tests to ensure that you have some level of verification happening.
See Sharing pacts between Consumer and Provider for options to implement this.
Do not hand-create any HTTP requests directly in your Consumer app. Testing through a client class (a class with the sole responsibility of handling the HTTP interactions with the Provider) gives you much more assurance that your Consumer app will be creating the HTTP requests that you think it should.
Sure, you’ve checked that your client deserialises the HTTP response into the Alligator class you expect, but then you need to make sure that when you create an Alligator in another test, you create it with valid attributes (e.g. is the Alligator’s last_login_time a Time or a DateTime?). One way to do this is to use factories or fixtures to create the models for all your tests. See this gist for a more detailed explanation.
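A minimal factory sketch, using the Alligator example from above (the attribute names and defaults are illustrative assumptions):

```ruby
require "time"

# The domain model the client deserialises responses into.
Alligator = Struct.new(:name, :last_login_time, keyword_init: true)

# Hypothetical factory: one place that knows how to build a *valid* Alligator,
# so every test creates it with the same attribute types the client produces.
def build_alligator(overrides = {})
  defaults = {
    name: "Mary",
    # The client parses timestamps into Time, so the factory must too.
    # A test that hand-rolled a DateTime here would hide a type mismatch.
    last_login_time: Time.parse("2024-01-01T00:00:00Z")
  }
  Alligator.new(**defaults.merge(overrides))
end
```

Because every test goes through the factory, a change to the model's attributes or types has to be made in one place, and any test that silently depended on the old shape fails loudly.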
Each interaction is tested in isolation, meaning you can’t do a PUT/POST/PATCH and then follow it with a GET to ensure that the values you sent were actually read successfully by the Provider. For example, if you have an optional surname field, and you send lastname instead, a Provider will most likely ignore the misnamed field and return a 200, failing to alert you to the fact that your lastname has gone to the big /dev/null in the sky.
To ensure you don’t have a Garbage In Garbage Out situation, expect the response body to contain the newly updated values of the resource, and all will be well.
If you can't include the updated resource in the response, another way to avoid GIGO is to use a shared fixture between a GET response body, and a PUT/POST request body. That way, you know that the fields you are PUTing or POSTing are the same fields that you will be GETing.
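The shared-fixture approach can be sketched in a few lines (the field names are the surname/lastname example from above; the fixture constant is an illustrative name):

```ruby
require "json"

# One fixture, used as both the PUT request body in one interaction and the
# expected GET response body in another.
ALLIGATOR_FIXTURE = { "name" => "Mary", "surname" => "Smith" }.freeze

put_request_body  = JSON.generate(ALLIGATOR_FIXTURE)  # what the consumer PUTs
get_response_body = JSON.generate(ALLIGATOR_FIXTURE)  # what it expects to GET back

# Because both sides come from the same fixture, a consumer that accidentally
# sent "lastname" instead of "surname" would fail the GET expectation instead
# of silently passing a 200 on the PUT.
round_tripped = JSON.parse(get_response_body)
```

This doesn't prove the provider persists the data, but it does guarantee that the fields you PUT are the same fields you assert on when you GET, which closes the garbage-in-garbage-out gap within the consumer's own tests.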