Pardon Testcases
Tests
Tests build on top of collections and use additional “sequence” https files. These use a similar format, but serve as scripts for running a sequence of requests/responses rather than as templates for a single request/response.
Pardon’s test system executes trials. Trials can be organized in gamuts (similar to how it("should", ...) or test(...) tests from other frameworks can be organized with describe blocks).
To illustrate the concept, let’s consider a test setup for confirming that ping works.
In addition to our collection, we need an overall script for configuring the test system,
a test, and a sequence to execute.
- tests/service.test.ts
- sequences/ping.flow.https
- pardon.test.ts
- …
- pardonrc.yaml
- collection/…
These files
- define the trials/sequences to run, and
- configure the test environment.
The trials have parameterized names. The name specifies where the output of each trial execution should go, so trial names have to be unique. Here we’re using %env/ping so we can run this trial in multiple environments at once; the report output will also go into directories according to the environment each trial was run in.
Here we define that we expect ping to reply with pong.
The context lists which environment values should be picked up to configure the request (env controls whether this ping should be configured for stage or prod, or perhaps local).
This configures the test run overall: it uses Pardon’s testcase generation framework to generate the initial set of cases for Pardon to run. (defi here defaults the tests to run with env set to both stage and prod, but also allows local.)
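Putting the pieces together, a trial registration in tests/service.test.ts might look roughly like the sketch below. The import path and the trial callback shape are assumptions; only the parameterized name %env/ping and the env value come from the description above.

```ts
import { trial } from "pardon/testing"; // assumed import path

// The %env parameter makes this one declaration expand to stage/ping, prod/ping
// (and local/ping when selected); the resolved name also decides where the
// report output for each run is written.
trial("%env/ping", async ({ env }) => {
  // The body would execute sequences/ping.flow.https in the selected
  // environment, which expects ping to reply with pong.
});
```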
Sequences: Units and Flows
Pardon’s test engine is built to flow data as naturally as possible: tests accept a single argument, the “environment”, which is used and modified by executing sequences of requests.
Units and flows are both sequences; Pardon considers units to be idempotent.
The syntax of unit and flow files is basically the same as the collection endpoint and mixin https files, but the initial YAML schema and the overall semantics are different.
Unlike an endpoint or mixin, the requests and responses here are not “alternatives for making a single request”, but rather a list of requests to make (with control flow and retry limits included).
So this flow specifies the following steps.
Additionally we can name responses…
Naming requests to script sequences
We can name requests, too. If there’s a request whose name matches an outcome name, we run that request next. An unnamed outcome goes to the “next” request in the list. We can also specify retry limits and delays along with requests and responses, respectively.
As an example, some services expose async resources that we need to poll until they become ready. This example specifies two requests and their respective response-matching behavior:
This sequence starts with a POST request, matching the response status to done or processing. In the case of done the outcome is success, and as there’s no request named success, we are done; a processing status yields the waiting outcome.
If the outcome is waiting, we wait 10 seconds and then proceed to run the GET request; again we match the status of the result, waiting 5 more seconds for each processing response.
Because the GET request is named waiting / 10, we would poll the resource at most 10 times before giving up.
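To make the control flow concrete, here is a plain TypeScript sketch of the behavior described above. This is not the https sequence syntax; createResource and getResource are hypothetical stand-ins for the flow’s POST and GET requests.

```ts
// Hypothetical stand-ins for the flow's POST and GET requests.
declare function createResource(): Promise<{ status: "done" | "processing" }>;
declare function getResource(): Promise<{ status: "done" | "processing" }>;

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function pollUntilDone() {
  // POST: a done status is the success outcome (no request named success, so we stop);
  // a processing status is the waiting outcome, which leads to the GET request.
  let { status } = await createResource();
  if (status === "done") return "success";

  // The GET request is named "waiting / 10": at most 10 attempts.
  for (let attempt = 0; attempt < 10; attempt++) {
    await delay(attempt === 0 ? 10_000 : 5_000); // 10s after the POST, then 5s per processing response
    ({ status } = await getResource());
    if (status === "done") return "success";
  }

  throw new Error("resource never became ready (retry limit reached)");
}
```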
Using sequences in sequences
Each sequence runs its steps sequentially, but a sequence can rely on any number of other sequences running before it, and those dependencies are run in parallel.
Imagine we want to find pre-defined products named "pen" and "pencil": we could define a unit that searches the products list and returns the id as product in the environment. We could define this as a find-product.unit.https sequence. Making it a unit means that the result (given the inputs) would be cached and reused across tests in the test run.
Using this sequence we can define a flow to order pens and pencils. First we would use the find-product.unit twice and map the response product id value, and then we can use those values. The Pardon test runner would execute the find-product.unit calls in parallel.
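The caching and parallelism here can be pictured with a plain TypeScript analogy. This is not pardon’s API; searchProducts is a hypothetical stand-in for executing find-product.unit.https.

```ts
// Hypothetical stand-in for running find-product.unit.https with a product name.
declare function searchProducts(name: string): Promise<{ product: string }>;

// Units are idempotent, so their results are cached per distinct input for the test run.
const unitCache = new Map<string, Promise<{ product: string }>>();

function findProductUnit(name: string): Promise<{ product: string }> {
  const key = JSON.stringify({ name }); // cache key: the unit's inputs
  if (!unitCache.has(key)) {
    unitCache.set(key, searchProducts(name)); // executed at most once per input
  }
  return unitCache.get(key)!;
}

async function orderPensAndPencils() {
  // The flow's two unit dependencies run in parallel, like Promise.all here.
  const [pens, pencils] = await Promise.all([
    findProductUnit("pen"),
    findProductUnit("pencil"),
  ]);
  // ...place orders using pens.product and pencils.product...
}
```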
Parameterized Testcases
Suppose we need to run tests for different products: we can use cases / set to apply some values to the environment the test starts with. This updates the (initially single) test environment.
To run 3 cases, instead of defining three separate tests, we can use each to generate three environments: the each operator forks the current environment, defining an environment per product (multiplying the number of trials defined).
If we would like to configure production tests as well, we can run the tests with env set to each of stage and prod (this expands our 3 tests into 6 with a single change)!
As you can see, the generated test cases are a “cartesian product” of the sets in each each call.
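As a rough sketch, the 6 cases above could be generated like this. The helper names come from this page, but the import path, the cases callback shape, and the specific product names are assumptions.

```ts
import { cases, set, each } from "pardon/testing"; // assumed import path

cases(() => {
  set("env", each("stage", "prod"));               // forks into 2 environments
  set("name", each("pens", "pencils", "markers")); // × 3 products = 6 cases (product names illustrative)
});
```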
We should explore this behavior interactively.
BTW, each can be used outside multiple set() calls, or applied to values, and it’s easier to type here. Try applying each to the env value as well.
We can also use object syntax for the set call, to set multiple key-value pairs together.
Feel free to use this to explore.
TMTOWTDI
(there’s more than one way to do it)
The testcase methods can be called multiple ways, to support a broad range of expressions.
For instance, each and set have multiple ways they can be called: set can be called with an object to assign multiple values at once.
We can also pass lambdas to each, evaluating the environment across different paths.
We can also use each() as a “value” passed to set, so these are also equivalent:
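A sketch of the equivalent forms described here follows; the exact signatures and import path are assumptions, and the values are illustrative.

```ts
import { each, set } from "pardon/testing"; // assumed import path

// each over actions, each over lambdas, and each as a value passed to set
// should all fork the environment the same way:
each(set("env", "stage"), set("env", "prod"));
each(() => set("env", "stage"), () => set("env", "prod"));
set("env", each("stage", "prod"));
```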
However, to get the combination of values we had before using a single set, we need to introduce robin, which evaluates once but in a round-robin fashion. In contrast, two each statements will produce 4 results:
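The difference between the two expansions can be illustrated in plain TypeScript; this is not pardon’s API, just the described semantics over arrays of environments.

```ts
type Env = Record<string, unknown>;

// each(...): forks every environment per value, so two eaches multiply (2 × 2 = 4).
function forkEach(envs: Env[], key: string, values: unknown[]): Env[] {
  return envs.flatMap((env) => values.map((value) => ({ ...env, [key]: value })));
}

// robin(...): evaluates once per environment, cycling through its values,
// pairing values round-robin instead of multiplying the case count.
function applyRobin(envs: Env[], key: string, values: unknown[]): Env[] {
  return envs.map((env, i) => ({ ...env, [key]: values[i % values.length] }));
}

// forkEach(forkEach([{}], "a", [1, 2]), "b", ["x", "y"]).length === 4
// applyRobin(forkEach([{}], "a", [1, 2]), "b", ["x", "y"]).length === 2
```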
The full set of test case helpers is:
- set - applies values to the environment.
- def - applies defaults to the environment.
- defi - applies a default to a single value and filters unknown values.
- unset - removes values from the environment.
- each(...) - forks the environment by either different values or via different actions.
- repeat(n) - clones the environment into n copies.
- robin - behaves differently on each successive evaluation (as a value or as an action).
- fi - filter, or if (backwards); can introduce conditionals or select environments.
- stop(...) - essentially fi(...).then(if(false)).
- fun - defines a function, which can be called with exe. Functions can be used as actions or values.
- exe - executes a function.
- counter - creates a counter that increments each evaluation.
- format - builds a format string using the current environment.
- unique - filters out duplicate environments.
- local - creates a local context: local(...).export(...) evaluates actions in export without producing the values from local.
- skip - discards the first n environments.
- take - takes n environments (discards the rest).
- smoke - skips environments that are semi-redundant according to some criteria.
- shuffle - shuffles the order of the testcases, usually used with smoke.
- sort - sorts environments.
- debug - prints the “current” environment(s).
Configuring Trials
A trial is registered with a parameterized name. (The name is used as an output path for the test report.) A test file might be structured as cases + trial, which declares the single trial stage/get-product-pencils with the environment { env: "stage", name: "pencils" }.
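A rough sketch of such a file follows. The helper names come from this page; the import path, the callback shapes, and the get-product-%name parameterization are assumptions.

```ts
import { cases, set, trial } from "pardon/testing"; // assumed import path

cases(() => {
  // initial environment for the generated case
  set({ env: "stage", name: "pencils" });
});

// with that environment, the parameterized name resolves to stage/get-product-pencils
trial("%env/get-product-%name", async (environment) => {
  // ...execute the get-product sequence here...
});
```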
Let’s experiment with this structure to get a feel for it.
Here we can see the environments which would be passed into each named trial.
Again, we expand the cases per environment.
Conditional Configuration
We can omit particular configurations with fi() (read as either filter, or a backwards if) and stop().
Starting with our 6 cases, we can add stop({ env: "prod", name: "pens" }) to remove the production test cases involving pens.
We’re not selling the pens product until testing is complete in stage, so let’s add a stop command which discards environments that match the values provided.
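Continuing the earlier sketch (import path and signatures assumed; the stop({...}) form comes from this page), the stop call slots in after the sets:

```ts
import { cases, set, each, stop } from "pardon/testing"; // assumed import path

cases(() => {
  set("env", each("stage", "prod"));
  set("name", each("pens", "pencils", "markers"));
  stop({ env: "prod", name: "pens" }); // drop the prod + pens case: 6 cases become 5
});
```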
Observe that the prod + pens testcase is no longer shown.
Trials
Our service testing is not likely going to be so uniform that we want a single cases configuration for the entire test run.
Often we’ll have our environment config in one place (actually we can move this to a common config location), and we can define our trials in gamuts.
This gives us tools to organize the case configs hierarchically.
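A loose sketch of that organization, assuming a gamut nests trials the way a describe block nests tests (per the comparison at the top of this page); the import path, call shapes, and names here are assumptions.

```ts
import { gamut, cases, set, each, trial } from "pardon/testing"; // assumed import path

gamut("products", () => {
  cases(() => set("name", each("pens", "pencils")));

  trial("%env/get-product-%name", async (environment) => {
    // ...
  });

  // nested gamuts could scope further case configuration to a subset of trials
});
```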
There are many utilities defining transforms of the test case(s):
- fi(...).then().else() and stop for conditionals and termination.
- def and defi to default and default-filter values.
- counter and round-robin for producing varying data.
- smoke for covering a smaller subset of cases, often preceded with shuffle to remove symmetries.
- unique for removing duplicate cases.
- local().export() for evaluating with temporary values.
- fun and exe for defining functions/values.
- format for formatting identifying values.
- etc…
Sometimes we have a large number of tests defined and we don’t want to burden the system by running all of them all the time.
We can cut a large test run down to a smaller “smoke test” by specifying which things we want to cover; Pardon can pseudo-randomly filter down to a covering set of tests.
Smoke tests are specified with a “key-list”, an optional “per-list”, and an optional “shuffle” seed.
The key list looks like one of these (comma-separated keys, each with an optional max count)…
- a,b:2,c, meaning we’re smoking a maximum count of {a:1, b:2, c:1}, or
- 7,a,b,c:2, meaning a maximum count of {a:7, b:7, c:2} with a default of 7.
We’re counting the number of times we’ve seen the value of a, b, or c so far, and smoke skips tests once we’ve exceeded all of them.
The optional “per list” looks like %x,y,z, meaning the count is segregated per values of x, y, and z.
And the shuffle is just ~2 or some other number, with a value of ~0 for not applying the shuffle.
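To make the key-list counting concrete, here is a plain TypeScript illustration of the semantics described above. It is not pardon’s implementation, it only covers the key list (not the per-list or shuffle), and whether skipped cases advance the counters is an assumption here.

```ts
function parseKeyList(spec: string): Record<string, number> {
  const parts = spec.split(",");
  // a bare number sets the default max count (the 7 in "7,a,b,c:2"); otherwise it is 1
  const defaultMax = Number(parts.find((part) => /^\d+$/.test(part)) ?? 1);
  const maxima: Record<string, number> = {};
  for (const part of parts) {
    if (/^\d+$/.test(part)) continue;
    const [key, count] = part.split(":");
    maxima[key] = count ? Number(count) : defaultMax;
  }
  return maxima;
}
// parseKeyList("a,b:2,c")   → { a: 1, b: 2, c: 1 }
// parseKeyList("7,a,b,c:2") → { a: 7, b: 7, c: 2 }

// Keep a case while at least one keyed value is still under its maximum count;
// skip it once every key's value has already been seen enough times.
function smokeFilter(cases: Record<string, unknown>[], spec: string): Record<string, unknown>[] {
  const maxima = parseKeyList(spec);
  const seen = new Map<string, number>();
  return cases.filter((env) => {
    const keys = Object.keys(maxima);
    const adds = keys.some((key) => (seen.get(`${key}=${String(env[key])}`) ?? 0) < maxima[key]);
    if (adds) {
      for (const key of keys) {
        const id = `${key}=${String(env[key])}`;
        seen.set(id, (seen.get(id) ?? 0) + 1);
      }
    }
    return adds;
  });
}
```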
Understandably, this is best learned by experimenting:
If no %per specification is present, the default is per-trial; you can also use %trial to be explicit.
Try setting the smoke specification to…
- name - at least one test per name,
- name%trial - at least 1 test for each name, counted per env (note how this distributes better than 2,name,env above).