Pardon Testcases

Tests

Tests build on top of collections and use additional “sequence” https files. These use a similar format, but serve as scripts for running a sequence of requests and responses, rather than as templates for a single request/response.

Pardon’s test system executes trials. Trials can be organized in gamuts (similar to how it("should", ...) or test(...) tests from other frameworks can be organized with describe blocks).
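As a rough sketch of the shape (gamut and trial are covered in detail below; assuming both are importable from pardon/testing):

import { gamut, trial } from 'pardon/testing';

// a gamut groups trials, much like a describe block groups it(...) tests
gamut(() => {
  trial("ping", () => { /* ... */ });
});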

To illustrate the concept, let’s consider a test setup for confirming ping works. In addition to our collection, we need an overall script for configuring the test system, a test, and a sequence to execute.

  • tests/service.test.ts
  • sequences/ping.flow.https
  • pardon.test.ts
  • pardonrc.yaml
  • collection/…

These files

  • define the trials/sequences to run, and
  • configure the test environment

The trials have parameterized names. The name determines where the output of each trial execution goes, so trial names have to be unique.

Here we’re using %env/ping so we can run this trial in multiple environments at once, and the report output will go into directories according to the environment each run used: with env set to stage, for example, the trial is named stage/ping.

example.test.ts
import { trial, executeFlow } from 'pardon/testing';

trial("%env/ping", () => executeFlow("ping.flow"));

Sequences: Units and Flows

Pardon’s test engine is built to flow data as naturally as possible: Tests accept a single argument, the “environment”, which is used/modified by executing sequences of requests.
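For instance (a minimal sketch, reusing the ping example above), a trial callback can read values from the environment it receives:

import { trial, executeFlow } from 'pardon/testing';

// the callback receives the current test environment as its argument
trial("%env/ping", (environment) => {
  const { env } = environment; // values applied by cases() or supplied at run time
  console.log(`pinging in ${env}`);
  return executeFlow("ping.flow");
});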

Units and flows are both sequences; Pardon treats units as idempotent.

The syntax for unit and flow files is basically the same as for the collection endpoint and mixin https files, but the initial YAML schema and the overall semantics are different.

sequences/ping.flow.https
context:
- env
- port?
>>>
https://example.com/ping
<<<
200 OK
pong

Unlike an endpoint or mixin, the requests and responses here are not “alternatives for making a single request”, but rather a list of requests to make (with control flow and retry limits included).

So this flow specifies the following steps.

const { env, port } = environment;
const { inbound } = pardon({ env, port })`https://example.com/ping`();
// ... match inbound against the responses and assert at least one matches ...

Additionally we can name responses…

sequences/ping.flow.https
context:
- env
- port?
>>>
https://example.com/ping
<<< pong
200 OK
pong
<<< other
200 OK
{{other}}

Naming requests to script sequences

We can name requests, too. If there’s a request name that matches an outcome name, we run that request next. An unnamed outcome goes to the “next” request in the list. We can also specify retry limits on requests and delays on responses.

As an example, some services expose async resources that we need to poll until they become ready. This example specifies two requests and their respective response-matching behavior:

>>>
POST ...
<<< success
200 OK
{ status: "done" }
<<< waiting +10s
200 OK
{ status: "processing" }
>>> waiting / 10
GET ...
<<< success
200 OK
{ status: "done" }
<<< waiting +5s
200 OK
{ status: "processing" }

This sequence starts with the POST request, matching the response status to done or processing. If the status is done, the outcome is success, and since there’s no request named success, we are done.

If the outcome is waiting, we wait 10 seconds and then run the GET request: again we match the status of the result, waiting 5 more seconds after each processing response.

Because the GET request is named waiting / 10, we would poll the resource at most 10 times before giving up.

Using sequences in sequences

Each sequence runs its own steps sequentially, but a sequence can depend on any number of other sequences, and those dependencies are run in parallel beforehand.

Imagine we want to find pre-defined products named "pens" and "pencils": we could define a unit that searches the products list and returns the id as product in the environment.

We could define a find-product.unit.https sequence. Making it a unit means that the result (given the same inputs) is cached and reused across tests in the test run.

sequences/find-product.unit.https
context:
- env
- port?
- name
>>>
GET https://example.com/products?name={{name}}
<<<
200 OK
mux([
  { "id": "{{product}}" }
])

Using this sequence, we can define a flow to order pens and pencils. First we use find-product.unit twice, mapping each response’s product id to a separate value, and then we use those values in the order request.

sequences/order-pens-and-pencils.flow.https
context:
- env
- port?
use:
- sequence: find-product.unit
  context:
  - env
  - port?
  - name = 'pencils'
  provides:
  - product: pencils
- sequence: find-product.unit
  context:
  - env
  - port?
  - name = 'pens'
  provides:
  - product: pens
>>>
POST https://example.com/orders
{
  "cart": [
    { "product": "{{pens}}", "quantity": 2 },
    { "product": "{{pencils}}", "quantity": 3 }
  ]
}
<<<
200 OK
{
  "id": "{{order}}"
}

The pardon test runner would execute the find-product.unit calls in parallel.
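As with ping, we could register this flow as a trial (a sketch, reusing the executeFlow helper from the earlier example):

import { trial, executeFlow } from 'pardon/testing';

// one report directory per env, as with %env/ping above
trial("%env/order-pens-and-pencils", () =>
  executeFlow("order-pens-and-pencils.flow"));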

Parameterized Testcases

Suppose we need to run tests for different products; we can use cases / set to apply some values to the environment each test starts with.

cases(({ set }) => {
  set("env", "stage");
  set("name", "pens");
});

environments = [{
  env: "stage",
  name: "pens"
}]

This updates the single initial test environment.

To run 3 cases, instead of defining three separate tests, we can use each to generate three environments: the each operator forks the current environment, defining an environment per product (and multiplying the number of trials defined).

cases(({ set, each }) => {
  set("env", "stage");
  each(
    set("name", "pencils"),
    set("name", "pens"),
    set("name", "markers"),
  );
});

environments = [{
  env: "stage",
  name: "pencils"
}, {
  env: "stage",
  name: "pens"
}, {
  env: "stage",
  name: "markers"
}]

If we would like to configure production tests as well, we can run the tests with stage and prod each: this expands our 3 tests into 6 with a single change!

cases(({ set, each }) => {
  each(
    set("env", "stage"),
    set("env", "prod"),
  );
  each(
    set("name", "pencils"),
    set("name", "pens"),
    set("name", "markers"),
  );
});

environments = [{
  env: "stage",
  name: "pencils"
}, {
  env: "stage",
  name: "pens"
}, {
  env: "stage",
  name: "markers"
}, {
  env: "prod",
  name: "pencils"
}, {
  env: "prod",
  name: "pens"
}, {
  env: "prod",
  name: "markers"
}]

As you can see, the test cases generated are a “cartesian product” of the sets in each each.

We should explore this behavior interactively.

Exercises
configure three products

By the way, each can be used around multiple set() calls, or applied to values directly; the value form is easier to type here.

set("env", "stage");
set("name", each("pencils", "pens", "markers"));

Feel free to use this to explore.

TMTOWTDI

(there’s more than one way to do it)

The testcase methods can be called multiple ways, to support a broad range of expressions.

For instance, each and set have multiple ways they can be called:

set can be called with an object to assign multiple values at once:

set("a", "b");
set("c", "d");
set({
  a: "b",
  c: "d"
})

We can also pass lambdas to each, evaluating the environment across different paths.

each(
  () => {
    set("a", "b");
    set("c", "d");
  },
  () => {
    set("a", "x");
    set("c", "y");
  },
);

each(
  set({ a: "b", c: "d" }),
  set({ a: "x", c: "y" }),
);

We can also use each() as a “value” passed to set, so these are also equivalent:

each(
  set("a", "b"),
  set("a", "x"),
);
set("a", each("b", "x"));

However, to get the combination of values we had before using a single set, we need to introduce robin, which evaluates once per environment, cycling through its values in round-robin fashion:

each(
  set({ a: "b", c: "d" }),
  set({ a: "x", c: "y" }),
);

set({
  a: each("b", "x"),
  c: robin("d", "y"),
});
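Both of these produce the same two environments:

environments = [{
  a: "b",
  c: "d"
}, {
  a: "x",
  c: "y"
}]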

In contrast, two each statements will produce 4 results:

set({
  a: each("b", "x"),
  c: each("d", "y"),
});

each(
  set({ a: "b", c: "d" }),
  set({ a: "b", c: "y" }),
  set({ a: "x", c: "d" }),
  set({ a: "x", c: "y" })
);

The full set of test case helpers is:

  • set - applies values to the environment.
  • def - applies defaults to the environment.
  • defi - applies a default to a single value and filters unknown values.
  • unset - removes values from the environment.
  • each(...) - forks the environment, either across different values or via different actions.
  • repeat(n) - clones the environment into n copies.
  • robin - behaves differently on each successive evaluation (as a value or as an action).
  • fi - filter, or if (backwards); can introduce conditionals or select environments.
  • stop(...) - essentially fi(...).then(if(false)).
  • fun - defines a function, which can be called with exe. Functions can be used as actions or values.
  • exe - executes a function.
  • counter - creates a counter that increments on each evaluation.
  • format - builds a format string using the current environment.
  • unique - filters out duplicate environments.
  • local - creates a local context: local(...).export(...) evaluates the actions in export without producing the values from local.
  • skip - discards the first n environments.
  • take - takes n environments (discards the rest).
  • smoke - skips environments that are semi-redundant according to some criteria.
  • shuffle - shuffles the order of the testcases, usually used with smoke.
  • sort - sorts environments.
  • debug - prints the “current” environment(s).
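As a sketch of how a few of these compose (assuming def takes the same arguments as set; the region value is purely hypothetical):

cases(({ set, def, each, stop }) => {
  def("region", "us-east-1");   // hypothetical default; any case may override it
  set("env", each("stage", "prod"));
  set("name", each("pencils", "pens", "markers"));
  stop({ env: "prod", name: "pens" }); // prune an unsupported combination
});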

Configuring Trials

A trial is registered with a parameterized name (the name is used as an output path for the test report). A test file might be structured as cases + trial.

products.test.ts
import { cases, trial } from 'pardon/testing';

cases(({ set }) => {
  set({
    env: "stage",
    name: "pencils",
  });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});

which declares the single trial stage/get-product-pencils, with the environment { env: "stage", name: "pencils" }.

Let’s experiment with this structure to get a feel for it.

Exercises
expand the cases by env

Here we can see the environments which would be passed into each named trial.

cases(({ set, each }) => {
  set({
    env: each("stage", "prod"),
    name: "pencils"
  });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});

Conditional Configuration

We can omit particular configurations with fi() (read as either filter, or backwards if) and stop().

Starting with our 6 cases, we can add stop({ env: "prod", name: "pens" }) to remove the production test cases involving pens.

Exercises
exclude an unsupported product

We’re not selling pens until testing is complete in stage, so let’s add a stop command which discards environments that match the values provided.

Observe that the prod + pens testcase is no longer shown.

cases(({ set, each, stop }) => {
  set({
    env: each("stage", "prod"),
    name: each("pencils", "pens", "markers")
  });

  stop({ env: 'prod', name: "pens" });
});

trial("%env/get-product-%name", ({ env, name }) => {
  /* ... */
});

Trials

Our service testing is not likely going to be so uniform that we want a single cases configuration for the entire test run.

Often we’ll have our environment config in one place (in fact, we can move this to a common config location), and we can define our trials in gamuts.

products-and-categories.test.ts
import { cases, gamut, trial } from 'pardon/testing';

cases(({ set, each }) => {
  set("env", each("stage", "prod"));
});

gamut(() => {
  cases(({ set, each }) => {
    set("name", each("pencils", "pens"));
  });

  trial("%env/get-product-%name", () => {
    /*...*/
  });
});
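Since the outer cases applies to the whole file, we could presumably add a second gamut with its own name cases (a sketch; the category trials here are hypothetical):

gamut(() => {
  cases(({ set, each }) => {
    set("name", each("office", "school"));
  });

  trial("%env/get-category-%name", () => {
    /*...*/
  });
});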

This gives us tools to organize the case configs hierarchically.

There are many utilities defining transforms of the test case(s):

  • fi(...).then().else() and stop for conditionals and termination.
  • def and defi to default and default-filter values.
  • counter and robin for producing varying data.
  • smoke for covering a smaller subset of cases, often preceded by shuffle to remove symmetries.
  • unique for removing duplicate cases.
  • local().export() for evaluating with temporary values.
  • fun and exe for defining functions/values.
  • format for formatting identifying values.
  • etc…

Sometimes we have a large number of tests defined, and we don’t want to burden the system by running all of them all the time.

We can cut a large test run down to a smaller “smoke test” by specifying which things we want to cover. Pardon can pseudo-randomly filter down to a covering set of tests.

Smoke tests are specified with a “key-list”, an optional “per-list”, and an optional “shuffle” seed.

The key list looks like one of these (comma-separated keys, each with an optional max count):

  • a,b:2,c meaning we’re smoking a maximum count of {a:1, b:2, c:1}, or
  • 7,a,b,c:2 meaning a maximum count of {a:7, b:7, c:2} with a default of 7.

We’re counting the number of times we’ve seen the value of a, b, or c so far, and smoke skips a test when we’ve exceeded all of the maximums.

The optional “per list” looks like

  • %x,y,z meaning the count is segregated per values of x, y, and z.

And the shuffle is just ~2 or some other number, with a value of ~0 meaning the shuffle is not applied.
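Putting the pieces together (a sketch; that smoke() accepts a spec string of the form described above is an assumption):

cases(({ set, each, smoke }) => {
  set("env", each("stage", "prod"));
  set("name", each("pencils", "pens", "markers"));
  // assumption: smoke() takes the spec string directly.
  // cover each name at least once, counted per env, shuffled with seed 2
  smoke("name%env~2");
});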

This is best understood through experimentation:

Exercises
experiment with smoke tests

If no %per specification is present, the default is per-trial; you can also use %trial to make this explicit.

Exercises
experiment with smoke tests

Try setting the smoke specification to…

  • name - at least one test per name,
  • name%trial - at least one test for each name, counted per trial (note how this distributes better than 2,name,env above).