Pardon Testcases
Pardon includes a test runner focused on parallel execution of comprehensive cases.
A single test case is called a trial. A trial is defined by three things:
- a name (which must be unique).
- the starting environment.
- the function.
The trial can accept additional environmental data from the run, as well.
Trial functions can be implemented by stitching together one or more flows, along with any other validation.
Trials can be selected by name or glob pattern when debugging a single trial, and subsets can be selected according to various criteria for quicker, lighter-weight smoke testing of a few cases.
A basic test could simply run a flow.
import { trial, flow } from 'pardon/testing';

trial("%env/health-check", ({ env }) => {
  flow({ env })`
    >>> GET https://todo.example.com/health-check
    <<< 200
  `();
});
This validates that the /health-check endpoint returns 200, parameterized by the environment.
% pardon-runner health-check.test.ts env=local "**"
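Here env=local supplies environment data and the final "**" is the trial-selection glob (run everything). To debug a single trial, the glob can presumably be narrowed to the trial's expanded name, e.g. (assuming the same invocation style):

% pardon-runner health-check.test.ts env=local "local/health-check"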
Trials can be organized in surveys (similar to how it("should", ...) or test(...) tests from other frameworks can be organized with describe or suite blocks).
You can use surveys to organize trials:
survey("%env", { trial("health-check", ({ env }) => { ... });})
The trials have parameterized names. The name specifies where the output of each trial execution should go, so names must be unique.
Here we’re using %env/ping so we can run this trial in multiple envs at once. Also, the report output will go into directories according to the environment each trial was run in.
import { trial, flow } from 'pardon/testing';

trial("%env/ping", () => flow("./ping.flow.https"));
Here we define that we expect ping to reply with pong. The context lists what environment values should be picked up to configure the request. (env configures whether this ping should be configured for stage or prod, or perhaps local.)
context:
  - env
  - port? # support overriding the port for env=local
>>>
https://example.com/ping
<<<
200 OK
pong
The following configures the test run overall: it uses Pardon’s testcase generation framework to generate the initial set of cases for Pardon to run.
export default {
  opening({ defi, each }) {
    defi("env", each("stage", "prod"), "local");
  },
};
(defi here defaults env to run with both stage and prod, while also allowing local.)
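With that opening in place, and no env supplied on the command line, the initial environments would presumably be (this expansion is inferred from the defi/each descriptions, not shown in the original):

environments = [
  { env: "stage" },
  { env: "prod" }
]

while running with env=local would produce the single environment { env: "local" }.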
Pardon’s test engine is built to flow data as naturally as possible: Tests accept a single argument, the “environment”, which is used/modified by executing flows of requests.
The syntax for unit and flow files is basically the same as the collection endpoint and mixin https files, but the initial YAML schema and the overall semantics are different.
context:
  - env
  - port?
>>>
https://example.com/ping
<<<
200 OK
pong
Unlike an endpoint or mixin, the requests and responses here are not alternatives for making a single request, but rather a list of requests (and immediately following response templates) which can control flow.
So this flow specifies the following steps.
const { env, port } = environment;
const { ingress } = pardon({ env, port })`https://example.com/ping`();
// ... match ingress against the responses and assert at least one matches ...
Additionally we can name responses…
context:
  - env
  - port?
>>>
https://example.com/ping
<<< pong
200 OK
pong
<<< other
200 OK
{{other}}
import { trial, execute } from 'pardon/testing';
import assert from 'node:assert';

trial("%env/ping", async () => {
  const { outcome, other } = await execute("ping.flow");
  assert.equal(outcome, "pong");
});
Naming requests and flow control
We can name requests, too. If there’s a request name that matches an outcome name, we run that request next. An unnamed outcome goes to the “next” request in the list. We can also specify retry limits and delays along with requests and responses, respectively.
As an example, some services expose async resources that we need to poll for when they become ready.
>>>
POST https://example.com/create-long-running-task
<<< +10s
200 OK
{ task }
>>> check / 25
GET https://example.com/task/{task}
<<< done
200 OK
<<< check +5s
202 pending
This sequence starts with a POST request, captures the task identifier in the response, and proceeds to check the progress of the task in 5-second intervals after giving it a 10-second head start. The check request has a 25-attempt limit specified. If the response labeled done is matched, this ends the sequence because there’s no request with that label.
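As a rough sketch of the control flow these templates express, written in plain TypeScript rather than Pardon’s API (the URLs and shapes are taken from the example above):

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function pollTask() {
  // ">>> POST ..." creates the task; the response template captures its id
  const created = await fetch("https://example.com/create-long-running-task", {
    method: "POST",
  });
  const { task } = await created.json();
  await sleep(10_000); // "+10s": give the task a head start

  for (let attempt = 0; attempt < 25; attempt++) {
    // ">>> check / 25": the check request, limited to 25 attempts
    const check = await fetch(`https://example.com/task/${task}`);
    if (check.status === 200) return; // "done" matched; no request named done, so stop
    await sleep(5_000); // "check +5s": wait, then retry the check request
  }
  throw new Error("task did not complete");
}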
Using sequences in sequences
Each sequence runs sequentially, but a sequence can rely on any number of other sequences to run before it, and those are run in parallel with each other.
Imagine we want to find pre-defined products named "pens" and "pencils": we could define a unit that searches the products list and returns the id as product in the environment.
We could define a find-product.flow.https sequence.
context:
  - env
  - port?
  - name
>>>
GET https://example.com/products?name={{name}}
<<<
200 OK
[
  { "id": product }
]
(this search assumes exactly one result is returned)
We can use one flow in another.
context:
  - env
  - port?
use:
  - flow: find-product.flow
    context:
      - env
      - port?
      - name = 'pencils'
    provides:
      - product: pencils
  - flow: find-product.flow
    context:
      - env
      - port?
      - name = 'pens'
    provides:
      - product: pens
>>>
POST https://example.com/checkout
{
  "cart": [
    { "product": pens, "quantity": 2 },
    { "product": pencils, "quantity": 3 }
  ]
}
<<<
200 OK
{
  "id": "{{order}}"
}
Pardon runs the two find-product.flow calls in parallel.
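Conceptually, the use: section behaves like running both flows concurrently and merging what they provide, roughly as below (an illustrative TypeScript sketch; the two-argument execute call is an assumption, not Pardon’s documented API):

// assuming env and port are in scope
const [{ product: pencils }, { product: pens }] = await Promise.all([
  execute("find-product.flow", { env, port, name: "pencils" }),
  execute("find-product.flow", { env, port, name: "pens" }),
]);
// ... the checkout request then runs with pens and pencils available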
This is just one option of many for connecting requests.
Parameterized Testcases
Suppose we need to run tests for different products. We can use cases / set to apply some values to the environment the test starts with.
cases(({ set }) => {
  set("env", "stage");
  set("name", "pens");
});

environments = [
  { env: "stage", name: "pens" }
]
This updates the single initial test environment.
To run 3 cases, instead of defining three separate tests, we can use each to generate three environments instead: the each operator here forks the current environment, defining an environment per product (multiplying the number of trials defined).
cases(({ set, each }) => {
  set("env", "stage");

  each(
    set("name", "pencils"),
    set("name", "pens"),
    set("name", "markers"),
  );
});

environments = [
  { env: "stage", name: "pencils" },
  { env: "stage", name: "pens" },
  { env: "stage", name: "markers" }
]
If we would like to configure production tests as well, we can run the tests with each of stage and prod (this expands our 3 tests into 6, with a single change)!
cases(({ set, each }) => {
  each(
    set("env", "stage"),
    set("env", "prod"),
  );

  each(
    set("name", "pencils"),
    set("name", "pens"),
    set("name", "markers"),
  );
});

environments = [
  { env: "stage", name: "pencils" },
  { env: "stage", name: "pens" },
  { env: "stage", name: "markers" },
  { env: "prod", name: "pencils" },
  { env: "prod", name: "pens" },
  { env: "prod", name: "markers" }
]
As you can see, the test cases generated are a “cartesian product” of the sets in each each. We should explore this behavior interactively.
By the way, each can be used around multiple set() calls or applied directly to values, which is easier to type here.
set("env", "stage");set("name", each("pencils", "pens", "markers"));
Try applying each to the env value as well.
set("env", each("stage", "prod"));set("name", each("pencils", "pens", "markers"));
We can also use object syntax for the set call, to set multiple key-value pairs together.
set({
  env: each("stage", "prod"),
  name: each("pencils", "pens", "markers"),
});
Feel free to use this to explore.
TMTOWTDI
(there’s more than one way to do it)
The testcase methods can be called multiple ways, to support a broad range of expressions.
For instance, each and set can each be called in multiple ways. set can be called with an object to assign multiple values at once, so

set("a", "b");
set("c", "d");

is the same as

set({ a: "b", c: "d" });
We can also pass lambdas to each, evaluating the environment across different paths:

each(
  () => {
    set("a", "b");
    set("c", "d");
  },
  () => {
    set("a", "x");
    set("c", "y");
  },
);

which is equivalent to

each(
  set({ a: "b", c: "d" }),
  set({ a: "x", c: "y" }),
);
We can also use each() as a “value” passed to set, so these are equivalent:

each(
  set("a", "b"),
  set("a", "x"),
);

set("a", each("b", "x"));
However, to get the combination of values we had before using a single set, we need to introduce robin, which evaluates once but in a round-robin fashion:

each(
  set({ a: "b", c: "d" }),
  set({ a: "x", c: "y" }),
);

is equivalent to

set({
  a: each("b", "x"),
  c: robin("d", "y"),
});
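robin pairs its values with the forks rather than crossing them, so this produces the same two environments as the each(...) form:

environments = [
  { a: "b", c: "d" },
  { a: "x", c: "y" }
]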
In contrast, two each statements will produce 4 results:

set({
  a: each("b", "x"),
  c: each("d", "y"),
});

which expands to

each(
  set({ a: "b", c: "d" }),
  set({ a: "b", c: "y" }),
  set({ a: "x", c: "d" }),
  set({ a: "x", c: "y" }),
);
The full set of test case helpers is:
- set - applies values to the environment.
- def - applies defaults to the environment.
- defi - applies a default to a single value and filters unknown values.
- unset - removes values from the environment.
- each(...) - forks the environment into different values or via different actions.
- repeat(n) - clones the environment into n copies.
- robin - behaves differently on each successive evaluation (as a value or as an action).
- fi - filter, or if backwards; can introduce conditionals or select environments.
- stop(...) - essentially fi(...).then(if(false)).
- fun - defines a function, which can be called with exe. Functions can be used as actions or values.
- exe - executes a function defined by fun.
- counter - creates a counter that increments on each evaluation.
- format - builds a format string using the current environment.
- unique - filters out duplicate environments.
- local - creates a local context: local(...).export(...) evaluates the actions in export without producing the values from local.
- skip - discards the first n environments.
- take - takes n environments (discards the rest).
- smoke - skips environments that are semi-redundant according to some criteria.
- shuffle - shuffles the order of the testcases; usually used with smoke.
- sort - sorts environments.
- debug - prints the “current” environment(s).
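To get a feel for how a few of these combine, here is a speculative sketch (the exact signatures of counter, format, and debug are assumptions based on the descriptions above):

cases(({ set, each, counter, format, debug }) => {
  set("env", each("stage", "prod")); // fork: one environment per env value
  set("run", counter());             // assumed: a value that increments per evaluation
  set("label", format("%env-%run")); // assumed: formats a string from the environment
  debug();                           // prints the resulting environment(s)
});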
Configuring Trials
A trial is registered with a parameterized name. (The name is used as an output path for the test report.) A test file might be structured as a cases + trial pair.
import { cases, trial } from 'pardon/testing';

cases(({ set }) => {
  set({
    env: "stage",
    name: "pencils",
  });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});
which declares the single trial stage/get-product-pencils, with the environment { env: "stage", name: "pencils" }.
Let’s experiment with this structure to get a feel for it.
Here we can see the environments which would be passed into each named trial.
cases(({ set, each }) => {
  set({ env: each("stage", "prod"), name: "pencils" });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});
Again, we expand the cases per environment.
cases(({ set, each }) => {
  set({
    env: each("stage", "prod"),
    name: each("pencils", "pens", "markers"),
  });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});
Conditional Configuration
We can omit particular configurations with fi() (read as either filter, or backwards if) and stop().
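For example, fi might branch configuration per environment, along these lines (a hedged sketch: the match-object argument mirrors the stop(...) usage below and the .then()/.else() shape comes from the utility summary above, but the exact API is an assumption):

cases(({ set, each, fi }) => {
  set("env", each("stage", "prod"));
  fi({ env: "prod" })
    .then(set("name", "pencils"))                // prod tests pencils only
    .else(set("name", each("pencils", "pens"))); // stage tests both
});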
Starting with our 6 cases, we can add stop({ env: "prod", name: "pens" }) to remove the production test cases involving pens.
We’re not selling the pens product until testing is complete in stage, so let’s add a stop command, which discards environments that match the values provided.
Observe that the prod + pens testcase is no longer shown.
cases(({ set, each, stop }) => {
  set({
    env: each("stage", "prod"),
    name: each("pencils", "pens", "markers"),
  });

  stop({ env: "prod", name: "pens" });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});
Trials
Our service testing is not likely to be so uniform that we want a single cases configuration for the entire test run. Often we’ll have our environment config in one place (in fact, it can move to a common config location), and we can define our trials in surveys.
cases(({ set, each }) => {
  set("env", each("stage", "prod"));
});

survey(() => {
  cases(({ set, each }) => {
    set("name", each("pencils", "pens"));
  });

  trial("%env/get-product-%name", () => { /* ... */ });
});
This gives us tools to organize the case configs hierarchically.
There are many utilities defining transforms of the test case(s):
- fi(...).then().else() and stop for conditionals and termination.
- def and defi to default and default-filter values.
- counter and (round-)robin for producing varying data.
- smoke for covering a smaller subset of cases, often preceded with shuffle to remove symmetries.
- unique for removing duplicate cases.
- local().export() for evaluating with temporary values.
- fun and exe for defining functions/values.
- format for formatting identifying values.
- etc…
Sometimes we have a large number of tests defined, and we don’t want to burden the system by running all of them all the time. We can cut a large test run down to a smaller “smoke test” by specifying which things we want to cover: Pardon can pseudo-randomly filter down to a covering set of tests.
Smoke tests are specified with a “key-list”, an optional “per-list”, and an optional “shuffle” seed.
The key-list looks like one of these (comma-separated, each key with an optional max count):
- a,b:2,c - meaning we’re smoking a maximum count of {a: 1, b: 2, c: 1}, or
- 7,a,b,c:2 - meaning a maximum count of {a: 7, b: 7, c: 2}, with a default of 7.
We’re counting the number of times we’ve seen the value of a, b, or c so far, and smoke skips tests when we’ve exceeded all of them.
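As a hand-worked illustration, take the six env × name cases from earlier with the spec env,name (maximum count of 1 per value, counted globally here for simplicity). Walking the cases in order, a case runs while at least one of its key values has not yet exceeded its count:

environments = [
  { env: "stage", name: "pencils" },  // kept: stage and pencils are both new
  { env: "stage", name: "pens" },     // kept: pens is new
  { env: "stage", name: "markers" },  // kept: markers is new
  { env: "prod", name: "pencils" },   // kept: prod is new
  // { env: "prod", name: "pens" }    // skipped: prod and pens both exceeded
  // { env: "prod", name: "markers" } // skipped: prod and markers both exceeded
]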
The optional “per list” looks like %x,y,z, meaning the count is segregated per the values of x, y, and z.
And the shuffle is just ~2 or some other number, with a value of ~0 for not applying the shuffle.
This is best understood by experimenting:
If no %per specification is present, the default is per-trial; you can also use %trial to make that explicit.
Try setting the smoke specification to…
- name - at least one test per name,
- name%trial - the same, with the per-trial counting made explicit,
- name%env - at least one test for each name, counted per env (note how this distributes better than 2,name,env above).