Virgil Dupras a034f63e23 test: begin adding common test harnessing code
This should make tests a bit more convenient to write and debug.

Moreover, begin the de-IX-ization of parseExpr. I have, in a local WIP, a
parseExpr implemented using a recursive descent algorithm; it passes all tests,
but it unfortunately assembles a faulty zasm. I have to find the expressions
that it doesn't parse properly.

But before I do that, I prefer to commit these significant improvements I've
been making to the test harness in parallel with this development.
2019-12-23 15:41:25 -05:00
| Name | Last commit | Date |
|------|-------------|------|
| avra | avra: add LD/ST | 2019-12-22 21:50:20 -05:00 |
| shell | Include tools/tests/shell/test.cfs in repo | 2019-12-12 14:49:09 -05:00 |
| unit | test: begin adding common test harnessing code | 2019-12-23 15:41:25 -05:00 |
| zasm | Make makefiles and shell scripts portable | 2019-12-09 09:45:22 -05:00 |
| Makefile | avra: first steps | 2019-12-13 17:38:40 -05:00 |
| README.md | tools/tests: add missing doc about shell tests | 2019-12-12 16:31:52 -05:00 |

# Testing Collapse OS

This folder contains Collapse OS' automated testing suite. To run, it needs `tools/emul` to be built. You can run all tests with `make`.

## zasm

This folder tests zasm's assembling capabilities by assembling test source files and comparing the results with expected binaries. These binaries used to be verified against a gold standard assembler, scas, but at some point compatibility with scas was broken, so we now test against previously generated binaries, making those tests essentially regression tests.

Those reference binaries sometimes change, especially when we update code in core libraries, because some tests include them. In that case, we have to update the binaries to the new expected values, taking extra care not to introduce a regression into the test references.
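
To give an idea of why core library changes ripple into the references, a test source can look roughly like this (a sketch only: the include and routine names are made up, and the exact syntax should be checked against the real sources under `zasm/`):

```
; Hypothetical zasm test source. Because it pulls in a core library,
; its reference binary must be regenerated whenever that library changes.
.inc "core.asm"         ; illustrative include
	ld	hl, 0xd000
	call	someCoreRoutine ; made-up routine name
	halt
```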

## unit

Those tests target specific routines and exercise them using `tools/emul/runbin` (a minimal example follows the list), which:

  1. Loads the specified binary
  2. Runs it until it halts
  3. Verifies that A is zero. If it's not, we're in error and we display the value of A.
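
For example, a bare-bones test binary has this shape (a sketch only; the labels and the routine being checked are illustrative):

```
; Minimal shape of a runbin test: run some checks, then halt with the
; result in A (0 means success, anything else is the reported error).
	jp	test

test:
	ld	a, 2
	add	a, a		; stand-in for the routine under test
	cp	4		; did it produce the expected result?
	jp	nz, fail
	xor	a		; all checks passed: A = 0
	halt
fail:
	ld	a, 1		; non-zero error code
	halt
```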

Test source code has no harnessing and is written in a very "hands on" style. At the moment, debugging a test failure is a bit tricky because the error code often doesn't tell us much.

The convention is to keep a `testNum` counter variable around and to call `nexttest` after each success so that we can easily tell where we failed.
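
In practice, that convention looks something like this (a sketch of the idea, not the verbatim harness code):

```
; testNum tracks which test we're in; fail reports it through A.
testNum:	.db	1

nexttest:
	ld	a, (testNum)
	inc	a
	ld	(testNum), a
	ret

fail:
	ld	a, (testNum)	; error code = current test number
	halt
```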

Then, if you need to debug the cause of a failure, well, you're on your own. However, there are tricks:

  1. Run `unit/runtests.sh <name of file to test>` to target a specific test unit.
  2. Insert a `halt` to see the value of `A` at any given moment: it will be your reported error code (if 0, `runbin` will report a success). See the snippet below.
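
For example, to inspect HL's low byte at some point in a test, you can temporarily add (illustrative):

```
	ld	a, l	; copy the value we want to look at into A
	halt		; runbin now reports it as the "error" code
```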

## shell

Those tests are in the form of shell "replay" files. Every `.replay` file in this folder contains the contents to type into the shell. Those contents are piped through the shell and the output is then compared with the corresponding `.expected` file. If they match exactly, the test passes.
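
As an illustration, a `.replay` file might contain something like this (hypothetical; assuming the shell's `mptr` and `peek` commands, which may differ in your build):

```
mptr a000
peek 2
```

The matching `.expected` file would then hold the exact output the shell produces for that session, captured from a known-good run.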