more minor edits to README.md

I think I have a problem.  Someone stop me.
jlamothe 2023-11-23 01:04:36 +00:00
parent 2054ab3096
commit daf189dc8d

README.md

@@ -40,11 +40,11 @@ argument.
## `run_tests()`
This will typically be the first function called. It creates an
initial `TestState` value, runs the tests, and displays a test log and
summary at the end. If any of the tests fail, it will cause the test
process to exit with a status of `"test(s) failed"`. Its prototype
follows:
```C
void run_tests(void (*)(TestState *));
```
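As a sketch of typical usage, a test program might look something like
the following (the `all_tests` runner and the `test.h` header name are
illustrative assumptions, not names taken from this library):
```C
#include "test.h"  /* assumed header name; it must declare TestState and run_tests() */

/* run_tests() creates a fresh TestState and passes it to this runner. */
static void all_tests(TestState *s)
{
    (void)s;  /* the individual tests would be dispatched from here */
}

int main(void)
{
    run_tests(all_tests);  /* runs the tests, then prints the log and summary */
    return 0;              /* not reached if any test failed, per the note above */
}
```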
@@ -56,13 +56,13 @@ created `TestState` value will be passed to this function.
## Simple Tests
The simplest form of test can be represented by a function resembling
the following:
```C
TestResult my_test(TestState *s)
{
// test code goes here...
}
```
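As a minimal sketch, a test that has not been written yet could simply
report itself as pending; `test_pending` is the only result value
quoted in this excerpt, and the function name is illustrative:
```C
/* Stub for a test that has not been implemented yet. */
TestResult not_yet_written(TestState *s)
{
    (void)s;              /* the state is not needed by this stub */
    return test_pending;  /* recorded as pending in the summary */
}
```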
@@ -74,7 +74,7 @@ unsurprisingly) the result of the test. The options are as follows:
- `test_pending`: the test is pending, and should be ignored for now
Tests of this type can be run by passing a pointer to them to the
`run_test()` function, which has the following prototype:
```C
void run_test(
@@ -83,9 +83,9 @@ void run_test(
)
```
This function will call the provided test function and update the
provided `TestState` to reflect the result. Thus, the above
hypothetical test could be run as follows:
```C
void
@@ -103,24 +103,26 @@ main()
```
Passing a null `TestState` pointer will cause nothing to happen. This
is true of all functions in this library. (This behaviour might be
reconsidered later, so don't count on it.) Passing a null function
pointer to `run_test()` is interpreted as a pending test.
## Passing Values to Tests
Since C supports neither lambdas nor closures, this would leave one
with little choice but to come up with a unique name for each
individual test function. This, while possible, would definitely be
rather inconvenient. To combat this shortcoming, it is helpful to be
able to pass data into a generic test function so that it can be
reused multiple times.
### The `ptr` Value
The `TestState` struct has a member called `ptr`, which is a `void`
pointer that can be set prior to calling `run_test()` (or any other
function, really). This value can then be referenced by the test
function, giving you the ability to essentially pass in (or out) *any*
type of data you may need. While not ideal, it's *a* solution.
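For example, a generic test might receive its input through `ptr` as
sketched below. The result values other than `test_pending` are elided
from this excerpt, so the stub only reports pending; the function name
is illustrative:
```C
/* A generic test that reads its input through the state's ptr member. */
TestResult check_answer(TestState *s)
{
    int *expected = s->ptr;  /* set by the caller before the test is run */
    (void)expected;          /* a real test would compare against *expected and
                                return the library's pass/fail results, whose
                                names are not shown in this excerpt */
    return test_pending;
}
```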
The library does not perform any kind of validation or automatic
memory management on the `ptr` value (this is C after all), so the
@@ -129,13 +131,13 @@ tests.
### Convenience Functions
As the test suite becomes more and more complex, managing a single
`ptr` value can become increasingly burdensome. For this reason,
there are a few convenience functions that provide an alternate
mechanism for passing data into a function without altering the `ptr`
value. (They actually do alter it internally, but they restore the
original value before passing the state on.) Two such functions are
`run_test_with()` and `run_test_compare()`.
`run_test_with()` has the following prototype:
@@ -154,8 +156,8 @@ the third argument is the pointer that gets passed into the test
function.
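Since the full prototype is elided here, the following is only a rough
sketch of a call: it assumes the state pointer comes first and the test
function second, with the data pointer third as described above, and it
reuses the hypothetical `check_answer()` test from the `ptr` section:
```C
/* Sketch only: the (state, test, data) argument order is an assumption. */
static void pointer_tests(TestState *s)
{
    int expected = 42;                          /* hypothetical input value */
    run_test_with(s, check_answer, &expected);  /* check_answer() presumably
                                                   reads the pointer through
                                                   the state, as described in
                                                   the ptr section above */
}
```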
`run_test_compare()` is similar, but it allows *two* pointers to be
passed into the test function. This is useful for comparing the
actual output of a function to an expected value, for instance.
The prototype for `run_test_compare()` follows:
@@ -168,18 +170,20 @@ void run_test_compare(
);
```
The pointers will be passed into the test function in the same order
they are passed into `run_test_compare()`.
## Test Contexts
It is useful to document what your tests are doing. This can be
achieved using contexts. Contexts are essentially labelled
collections of related tests. Contexts can be nested to create
hierarchies. This is useful both for organization purposes as well as
creating reusable test code. There are several functions written for
managing these contexts. Each of these functions takes as its first
two arguments: a pointer to the current `TestState`, and a pointer to
a string describing the context it defines. If the pointer to the
string is null, the tests are run as a part of the existing context.
### `test_context()`
@@ -193,7 +197,9 @@ void test_context(
This function takes a pointer to the current `TestState`, a string
describing the context, and a function pointer that is used the same
way as the one passed to `run_tests()`. This function will be called
and its tests will be run within the newly defined context. Nothing
prevents this function from being called again in a different context.
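For instance, a group of related tests might be wrapped in a context as
in the sketch below; the label and function names are made up for
illustration:
```C
/* Runner for a group of related tests; the names here are illustrative. */
static void arithmetic_tests(TestState *s)
{
    (void)s;  /* the individual run_test() calls for this group would go here */
}

/* The runner handed to run_tests(); it nests a labelled context inside. */
static void all_tests(TestState *s)
{
    test_context(s, "arithmetic", arithmetic_tests);
}
```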
### `test_context_with()`
@@ -206,11 +212,11 @@ void test_context_with(
);
```
This function works similarly to `test_context()`, but allows for the
passing of a `void` pointer into the test function in much the same
way as the `run_test_with()` function. Its arguments are (in order):
a pointer to the current state, the context description, a pointer to
the test function, and the pointer to be passed into that function.
### `test_context_compare()`
@@ -238,8 +244,8 @@ void single_test_context(
```
This function applies the context label to a *single* test. The
function passed in is expected to operate in the same way as the one
passed to `run_test()`.
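A sketch of how that might look, assuming the state pointer and the
label come first (as with the other context functions described above;
the full prototype is elided here), reusing the hypothetical
`not_yet_written()` test from the earlier sketch:
```C
static void misc_tests(TestState *s)
{
    /* Labels this one test without writing a separate runner for it. */
    single_test_context(s, "placeholder test", not_yet_written);
}
```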
### `single_test_context_with()`
@@ -272,12 +278,12 @@ I assume you get the idea at this point.
## Logging
When `run_tests()` finishes running the tests, it displays a log and
summary. The summary is simply a tally of the number of tests run,
passed, failed, and pending. While this is useful (and probably all
you need to know when all the tests pass), it is likely desirable to
have more detail when something goes wrong. To facilitate this, tests
can append to the test log, which is automatically displayed just
before the summary. There are two functions for doing this.
### `append_test_log()`