019d05f64c
That's my mega-commit you've all been waiting for. The code for the shell shares more routines with userspace apps than with kernel units, because, well, its behavior is that of a userspace app, not a device driver. This created a weird situation with libraries and jump tables. Some routines belonging to the `kernel/` directory felt weird there. And then comes `apps/basic`, which will likely share even more code with the shell. I was seeing myself creating huge jump tables to reuse code from the shell. It didn't feel right. Moreover, we'll probably want basic-like apps to optionally replace the shell. So here I am with this huge change in the project structure. I haven't tested all recipes on hardware yet; I will do that later. I might have broken some... But now the structure feels better, and the line between what belongs to `kernel` and what belongs to `apps` feels clearer.
Folder contents: `unit/`, `zasm/`, `Makefile`, `README.md`
# Testing Collapse OS
This folder contains Collapse OS' automated testing suite. To run, it needs `tools/emul` to be built. You can run all tests with `make`.
## zasm
This folder tests zasm's assembling capabilities by assembling test source files and comparing the results with expected binaries. These binaries used to be checked against a golden-standard assembler, scas, but at some point compatibility with scas was broken, so we now test against previously generated binaries, which makes those tests essentially regression tests.

These reference binaries sometimes change, especially when we update code in core libraries, because some tests include them. When that happens, we have to update the reference binaries to the new expected values, being extra careful not to introduce a regression into the test references.
## unit
These tests target specific routines and exercise them using `tools/emul/runbin`, which:

- Loads the specified binary
- Runs it until it halts
- Verifies that `A` is zero. If it isn't, the test has failed and the value of `A` is displayed as the error code.
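For illustration, here is a minimal sketch of the contract a test binary has to honor. The routine being "tested" and the `fail` label are hypothetical, chosen only to show the success/failure convention described above:

```
test1:
	ld	a, 42		; pretend the routine under test returned 42
	cp	42		; did we get the expected value?
	jr	nz, fail
	xor	a		; A = 0 at halt: runbin reports success
	halt
fail:
	ld	a, 1		; nonzero A at halt: runbin reports it as the error
	halt
```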
Test source code has no harness and is written in a very hands-on way. At the moment, debugging a test failure is a bit tricky because the error code often doesn't tell us much.
The convention is to keep a `testNum` counter variable around and call `nexttest` after each success so that we can easily have an idea of where we fail.
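As a rough sketch of that convention (the exact code in `unit/` may differ, and the `fail` label here is hypothetical):

```
nexttest:			; called after each successful check
	ld	a, (testNum)
	inc	a		; one more check passed
	ld	(testNum), a
	ret

fail:				; jump here from any failed check
	ld	a, (testNum)	; error code = number of the failing check
	halt

testNum:	.db	0
```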
Then, if you need to debug the cause of a failure, well, you're on your own. However, there are tricks.
- Run `unit/runtests.sh <name of file to test>` to target a specific test unit.
- Insert a `halt` to see the value of `A` at any given moment: it will be your reported error code (if it's 0, runbin will report a success). See the sketch after this list.
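For example, to inspect `A` right after a suspicious step, you could temporarily add something like this (`suspectRoutine` is a placeholder name):

```
	call	suspectRoutine	; placeholder for the routine being investigated
	halt			; temporary: runbin stops here and reports the
				; current A as the error code (0 means success)
```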