Reliable responses from the mock service reduce the likelihood of flaky tests.
Causes of failure are easier to identify as only one component is being tested at a time.
The design of the service provider is improved by considering first how the data is actually going to be used, rather than how it is most easily retrieved and serialised.
No separate integration environments need to be managed for automated integration tests - Pact tests run in standalone CI builds.
Integration flows that would traditionally require running multiple services at the same time can be broken down and each integration point tested separately.
Read the following:
and if you're really keen, Defect Analysis and Prevention for Software Process Quality Improvement
Research tells us that integration tests are more costly in terms of time, effort and maintenance, without giving us any stronger guarantees.
You might all be on the same team now but:
new people join teams and they won't have all the context that your team currently has
people leave and take their knowledge with them
organisations get restructured, and product ownership changes
people are fallible, and communication doesn't always happen, despite everyone's best intent
Something to note - the team that originally wrote Pact was "all on the same team".
Pact is like VCR in reverse. VCR records actual provider behaviour, and verifies that the consumer behaves as expected. Pact records consumer behaviour, and verifies that the provider behaves as expected. The advantages Pact provides are:
The ability to drive out the requirements for your provider first, meaning you implement exactly and only what you need in the provider.
Well documented use cases ("Given ... a request for ... will return ...") that show exactly how a provider is being used.
The ability to see exactly which fields each consumer is interested in, allowing unused fields to be removed, and new fields to be added in the provider API without impacting a consumer.
The ability to immediately see which consumers will be broken if a change is made to the provider API.
When using the Pact Broker, the ability to map the relationships between your services.
See https://github.com/pact-foundation/pact-ruby/wiki/FAQ#how-does-pact-differ-from-vcr for more examples of similar technologies.
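To make the "Given ... a request for ... will return ..." format concrete, here is a minimal sketch of the kind of pact file a consumer test generates (the service names, path and fields here are illustrative, not from a real project):

```json
{
  "consumer": { "name": "OrderWebUI" },
  "provider": { "name": "OrderAPI" },
  "interactions": [
    {
      "description": "a request for an order",
      "providerState": "an order with id 1 exists",
      "request": {
        "method": "GET",
        "path": "/orders/1"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 1, "status": "shipped" }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}
```

Each interaction documents one use case, and the `body` lists only the fields this consumer actually reads - which is what makes it possible to see which fields are safe to remove from the provider.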
OpenAPI and Pact are designed with different ends in mind. The differences can be summarised as follows:
The Swagger / OpenAPI specification aims to standardise the description and structure of an API. It can tell you which APIs are available and what fields/structure they expect, and it can generate documentation and a UI for interacting with them. It is not, however, a testing framework.
Pact, on the other hand, is essentially a unit testing framework using specification by example. It just so happens that, to be able to run those tests on both the API consumer and provider side, it needs to generate an intermediate format to communicate that structure - this is the specification.
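As a rough illustration of that idea - this is a simplified sketch in plain Python, not the real Pact API - the consumer test captures an example interaction as data (the "contract"), and the provider test later replays that same data against the real provider code:

```python
# --- the intermediate format: an example interaction captured as data ---
contract = {
    "description": "a request for user 42",
    "request": {"method": "GET", "path": "/users/42"},
    "response": {"status": 200, "body": {"id": 42, "name": "Alice"}},
}

# --- consumer side: the test runs against the example response, ---------
# --- never a live provider ---------------------------------------------
def consumer_render(user):
    # The consumer only cares about these two fields.
    return f'{user["name"]} (#{user["id"]})'

assert consumer_render(contract["response"]["body"]) == "Alice (#42)"

# --- provider side: verify the provider honours the contract ------------
def provider_handle(method, path):
    # Stand-in for the real provider implementation (hypothetical).
    if method == "GET" and path == "/users/42":
        return 200, {"id": 42, "name": "Alice", "email": "alice@example.com"}
    return 404, {}

status, body = provider_handle(
    contract["request"]["method"], contract["request"]["path"]
)
assert status == contract["response"]["status"]
# Only the fields the consumer uses must match; extra fields are fine.
for key, expected in contract["response"]["body"].items():
    assert body[key] == expected
```

Note that the provider may return extra fields (here, `email`) without breaking the contract - verification only checks what the consumer said it needs.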
In fact, the authors of the OpenAPI specification predicted such use cases by announcing:
Additional utilities can also take advantage of the resulting files, such as testing tools. Potentially, for example, we could use vendor extensions to document this extra metadata that is captured in our spec. This is one way the two projects could come together.
If you are using Swagger, consider using Swagger Mock Validator, a plugin developed at Atlassian that aims to unify these worlds.
Using it in combination with Pact gives you confidence that your API meets any published specification (for external clients), while also giving you confidence that any known consumer requirements (internal) are satisfied.
Where Pact will really give you an advantage over using Swagger alone is when it comes to making changes to your API. Pact allows you to see the impact of making a change to an API within minutes, and gives you a concrete list of which teams to talk to, and what to discuss. Your tests will show you when the new functionality has been adopted by each consumer, and when any old functionality can be removed. On the other hand, releasing a new version of a Swagger specification and waiting for all the consumer teams (who may or may not actually use that particular feature) to respond to those changes could take weeks or months.
There are a few key problems with end-to-end (E2E) testing:
E2E tests are slow - slow build times result in batching of changes, and batching is bad for Continuous Delivery.
E2E tests are hard to coordinate. How do you ensure the exact correct versions of all software components are exactly as they should be?
E2E complexity is non-linear - it gets harder and messier over time.
Why should you care about how other systems behave?
The litmus test is this: unless you can look someone straight in the eye and say that you don't spend a lot of time maintaining E2E environments, and that managing the tests isn't a constant challenge, it's time to consider another approach. If you have one or more people dedicated to managing release processes, that is probably a good sign you are heading in the wrong direction.
If you really want to hang onto these, consider pushing a subset of your E2E scenarios further down your pipeline as a type of "Smoke Test", running just a few key scenarios prior to releasing to customers.
NOTE: Obviously, there is an element of not wanting to throw the baby out with the bathwater here. Please factor this in accordingly.
See "but I already have an E2E integration suite that runs for an hour?". All of the problems still exist, but Docker numbs the pain (or defers it).
Then you are probably developing for many consumers, am I right? If you don't know who these consumers are going to be, then Pact may not be for you. If you have control over any of them, then Pact could be a good fit - you just won't be driving the design from the consumer (that's OK too).
Good - you shouldn't. You should evaluate Pact on a smaller project to prove its worth before drinking the Kool-Aid.
In fact, you don't even have to use Pact to implement contract testing and gain the glorious benefits - Pact just makes it easier.
Are you just saying that so we don't feel bad?
Here are some suggestions to win them over: