The goal was to be able to implement "(" in forth, but I realised that my
INTERPRET approach was wrong. Compiling the line beforehand is, after all,
not good. I'll have to change it again.
* String functions optimised
A few functions have been tweaked, but the biggest changes are in strlen, strskip and toWS, which take around two thirds of the cycles they used to (although strskip has more overhead). 10 bytes saved in total.
toWS had two bytes added by inlining the isWS call, and a jump to unsetZ was inlined too, saving a byte. This saved 29 cycles, with the original function being 90 cycles. I looked at other uses of isWS and it's difficult to inline it effectively in every situation, so I haven't inlined it elsewhere.
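For reference, an isWS-style check is just a short compare chain, so inlining it trades a call/ret round trip for having the compares sit directly in toWS's loop. A standalone sketch (the exact character set is an assumption on my part):

    wsSketch:                       ; Z set if the char in A is whitespace
            cp      ' '             ; space
            ret     z
            cp      0x09            ; tab
            ret     z
            cp      0x0d            ; CR
            ret     z
            cp      0x0a            ; LF
            ret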
rdWS had a byte and two cycles saved by inlining a jump to unsetZ.
strskip is the same size, with the loop cut down from 35 cycles to 21 cycles, but 18 cycles are added outside the loop. I expect one-character strings are in the minority, so this should save cycles overall.
strlen had 8 bytes saved, with the loop cut down from 38 cycles to 21 cycles, and 18 cycles removed outside the loop.
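A 21-cycle loop is exactly what `cpir` costs per repeat, so presumably something cpir-shaped is involved. A minimal strlen-style sketch along those lines, with the register conventions being my assumptions rather than the file's:

    strlenSketch:                   ; length of null-terminated string at (HL)
            push    hl              ; preserve the string pointer
            xor     a               ; search for the null terminator (clobbers A)
            ld      b, a
            ld      c, a            ; BC = 0, i.e. scan up to 64K bytes
            cpir                    ; 21 cycles per byte until (HL) == A
            ; cpir decremented BC once per byte examined, terminator included,
            ; so the length is the 16-bit complement of what is left in BC
            ld      a, b
            cpl
            ld      b, a
            ld      a, c
            cpl
            ld      c, a            ; BC = length, not counting the terminator
            pop     hl
            ret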
* Fixed strskip
strskip wasn't preserving `a` properly. The new code uses the shadow af register, so whilst a byte and 4 cycles have been added outside the loop, it's safer and cleaner. The flags register isn't affected either (it's restored along with `a`), but since the search goes on for up to 64KB, I think it's safe to say the end of the string will always be reached.
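Preserving `a` through that kind of search with the shadow pair looks roughly like this; only the `ex af, af'` idea is the point, the rest is assumed:

    skipSketch:
            ex      af, af'         ; tuck the caller's A (and F) away
            xor     a               ; A = 0: scan for the null terminator
            ld      b, a
            ld      c, a            ; BC = 0, i.e. up to 64K bytes
            cpir                    ; HL ends up just past the terminator
                                    ; (a real routine might dec hl here)
            ex      af, af'         ; caller's A and F come back untouched
            ret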
* Remove inlining of isWS
I've tweaked nearly every function in this file, so I'll go through them one by one.
parseDecimal has been reworked a little so that `a` can be used instead of `b` for checking for overflow. I had originally intended to redo it to work like the old parseDecimal, but I think the current method (once reworked a little) is cleaner and smaller, and should be just as fast. 7 bytes and 27 cycles saved.
parseHexadecimal has been changed to load hex digits into `b`, `d`, `c` and `e` from the right (so all the digits move along to the left as each new digit is inserted on the right), and only at the end is any shifting done, using the faster `add a, a` to do the left shifts. 9 bytes saved and 78 cycles saved inside the loop, with 49 cycles added after the loop.
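For comparison (not the actual code), shifting a 0-15 digit into the high nibble with `add a, a` costs 4 cycles per shift against 8 for `sla a`:

            ; A holds a 0-15 digit value; move it into the high nibble
            add     a, a            ; 4 cycles each, versus 8 for sla a
            add     a, a
            add     a, a
            add     a, a            ; A = digit << 4
            or      c               ; merge with a low digit assumed to be in C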
parseBinaryLiteral had a few instructions moved around, saving two bytes and 5 cycles inside the loop, and a further 15 cycles saved on error.
parseLiteral has been reworked slightly: the isDigit call has been replaced with an inline parseDecimalDigit, saving a byte and around 20-30 cycles, with around 16 more cycles saved if the number is a decimal. The .char routine has been reduced by a byte, with 6 cycles saved on success but 5 cycles added on error.
isDigit has been reduced by 4 bytes and 10 cycles on success, with a few more cycles saved on fail (hard to estimate due to branching).
Sub-parsers are seldom used by themselves, except for parseDecimal.
I'm tightening the code of this unit for two reasons:
1. Optimization
2. Upcoming API change where HL won't be preserved anymore, but will
point to the char following the last parsed char. This will allow us
to simplify lib/expr.
Instead of going left and right, finding operator chars and replacing them
with nulls, we parse expressions in a more orderly manner, one chunk at a
time. I think it qualifies as "recursive descent", but I'm not sure.
This allows us to preserve the string we parse and should also make the
implementation of parens much easier.
This should make tests a bit more convenient to write and debug.
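To make the "one chunk at a time" idea above concrete, here is a skeletal sketch of the parsing style, not the WIP code itself: every name, register convention and simplification here is hypothetical (in particular, a "term" is just a single decimal digit). HL walks the string, DE carries the value, and carry signals a parse error.

    exprSketch:                     ; expr := term ('+' term)*
            call    termSketch      ; leftmost term into DE, carry set on error
            ret     c
    .loop:
            ld      a, (hl)
            cp      '+'             ; other operators would be handled similarly
            jr      nz, .done
            inc     hl              ; consume the '+'
            push    de              ; save the left operand
            call    termSketch      ; right operand into DE
            pop     bc              ; left operand back, now in BC
            ret     c               ; propagate a parse error
            push    hl              ; keep our string pointer safe
            ld      h, b
            ld      l, c
            add     hl, de          ; HL = left + right
            ex      de, hl          ; result goes back into DE
            pop     hl              ; string pointer restored
            jr      .loop
    .done:
            or      a               ; clear carry: success, result in DE
            ret

    termSketch:                     ; placeholder "term": one decimal digit
            ld      a, (hl)
            sub     '0'
            cp      10
            ccf                     ; carry set if it wasn't a digit
            ret     c
            ld      e, a
            ld      d, 0
            inc     hl              ; consume the digit
            ret                     ; carry still clear: success

Parens would then drop in at the term level: a "(" makes the term parser call back into the expression parser and expect a ")" after it.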
Moreover, begin the de-IX-ization of parseExpr. I have, in a local WIP, a
parseExpr implemented using a recursive descent algo; it passes all tests, but
it unfortunately assembles a faulty zasm. I have to find the expressions that
it doesn't parse properly.
But before I do that, I prefer to commit these significant improvements I've
been making to the test harness in parallel with this development.
That's my mega-commit you've all been waiting for.
The code for the shell shares more routines with userspace apps than with kernel
units, because, well, its behavior is that of a userspace app, not a device
driver.
This created a weird situation with libraries and jump tables. Some routines
belonging to the `kernel/` directory felt weird there.
And then comes `apps/basic`, which will likely share even more code with the
shell. I was seeing myself creating huge jump tables to reuse code from the
shell. It didn't feel right.
Moreover, we'll probably want basic-like apps to optionally replace the shell.
So here I am with this huge change in the project structure. I haven't tested
all recipes on hardware yet; I'll do that later. I might have broken some...
But now, the structure feels better and the line between what belongs to
`kernel` and what belongs to `apps` feels clearer.
* Optimised parsing functions and other minor optimisations
unsetZ has been reduced by a byte, and between 17 and 28 cycles are saved depending on branching. Since the branch depends on `a` being 0, it shouldn't have to branch very often, so it should save 28 cycles most of the time. Including the initial call, the old version was 60 cycles, so this should be nearly twice as fast.
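For reference, a shape that lines up with those numbers (15 cycles plus the call when `a` is non-zero, 26 plus the call when it is zero); treat it as a sketch rather than a quote of the file:

    unsetZ:
            or      a               ; non-zero A: Z is already unset
            ret     nz              ; 4+11 cycles on the common path
            cp      1               ; A is zero, so cp 1 clears Z without changing A
            ret                     ; 4+5+7+10 cycles on this path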
fmtHex has been reduced by 4 bytes and between 3 and 8 cycles based on branching.
fmtHexPair had a redundant "and" removed, saving two bytes and seven cycles.
parseHex has been reduced by 7 bytes. Due to so much branching, it's hard to say if it's faster, but it should be since it's fewer operations and now conditional returns are used which are a cycle faster than conditional jumps. I think there's more to improve here, but I haven't come up with anything yet.
* Major parsing optimisations
Totally reworked both parseDecimal and parseDecimalDigit
parseDecimalDigit no longer exists, as it could be replaced by an inline alternative in the 4 places it appeared. This saves one byte overall, as the inline version is 4 bytes, 1 byte more than a call, and removing the function saved 5 bytes. It has been reduced from between 35 and 52 cycles (35 on error, so we'd expect 52 cycles to be more common unless someone's really bad at programming) to 14 cycles, so 2-3 times faster.
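For the record, the 4-byte inline form is a pair of 2-byte, 7-cycle immediate operations along these lines (the constants are my reconstruction, so treat them as a sketch):

            add     a, 0xc6         ; 0xff-'9': '0'-'9' land in 0xf6-0xff
            sub     0xf6            ; 0xff-9: digits map to 0-9, carry set otherwise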
parseDecimal has been reduced by a byte, and now the main loop is just about twice as fast, but with increased overhead. To put this into perspective, if we ignore error cases:
For decimals of length 1 it'll be 1.20x faster, for decimals of length 2, 1.41x faster, for length 3, 1.51x faster, for length 4, 1.57x faster, and for length 5 and above, at least 1.48x faster (even faster if there are leading zeroes or it's not the worst-case scenario).
I believe there is still room for improvement, since the first iteration can nearly be replaced with "ld l, c" (because 0*10=0), but when I tried this I could either add a zero check into the main loop, adding around 40 cycles and 10 bytes, or add 20 bytes to the overhead, and I don't think either of those options is worth it.
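For context, the per-digit step in a parser like this is a 16-bit multiply-by-ten followed by adding the new digit, something along these lines (register roles are assumptions):

            ; HL = HL * 10, using only adds
            add     hl, hl          ; x2
            ld      d, h
            ld      e, l            ; DE = HL * 2
            add     hl, hl          ; x4
            add     hl, hl          ; x8
            add     hl, de          ; x10
            ; then fold in the next digit, assumed to be 0-9 in A
            ld      e, a
            ld      d, 0
            add     hl, de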
* Inlined parseDecimalDigit
See previous commit, and /lib/parse.asm, for details
* Fixed tabs and spacing
* Fixed tabs and spacing
* Better explanation and layout
* Corrected error in comments, and a new parseHex
5 bytes saved in parseHex. Again, it's hard to say what that does to speed; the shortest path is probably a little slower, but I think non-error cases should be around 9 cycles faster for decimal and 18 cycles faster for hex, as there are now only two conditional returns and no complement carries.
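Pieced together from that description (two conditional returns, no ccf, and the `add a, 0xe9` that comes up in the fix below), a routine of this shape fits; the constants are my reconstruction, not a quote of the file. The char goes in through A, the 0-15 value comes out in A, and carry is set on error:

    parseHexSketch:
            add     a, 0xc6         ; 0xff-'9': digits land in 0xf6-0xff
            sub     0xf6            ; digits map to 0-9, carry otherwise
            ret     nc              ; it was 0-9: done
            and     0xdf            ; A is now char-0x30; fold lowercase onto upper
            add     a, 0xe9         ; 'A'-'F' (0x11-0x16 here) land in 0xfa-0xff
            sub     0xfa            ; map to 0-5, carry if out of range
            ret     c               ; not a hex digit
            add     a, 10           ; back up to 10-15
            ret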
* Fixed the new parseHex
I accidentally did `add 0xe9` without specifying `a`
* Commented the use of daa
I made the comments surrounding my use of daa much clearer, so it isn't quite so mystical what's being done here.
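The construction in question is, I believe, the classic daa-based nibble-to-ASCII trick; a sketch of that technique, for a 0-15 value already masked into A:

            add     a, 0x90         ; 0-9 -> 0x90-0x99, 10-15 -> 0x9a-0x9f
            daa                     ; 0x90-0x99 stay put, 0x9a-0x9f wrap to 0x00-0x05 with carry
            adc     a, 0x40         ; 0x90s -> 0xd0s, wrapped values -> 0x41-0x46
            daa                     ; 0xd0s wrap to '0'-'9'; 0x41-0x46 are already 'A'-'F'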
* Removed skip leading zeroes, added skip first multiply
Now instead of skipping leading zeroes, the first digit is loaded directly into hl without first multiplying by 10. This means the first loop iteration is skipped in the overhead, making the method 2-3 times faster overall, and it is now faster for the more common fewer-digit cases too. The number of bytes is exactly the same, and the inner loop is slightly faster too thanks to no longer needing to load `a` into `c`.
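In other words, the overhead now seeds the total with the first digit directly, along the lines of (a sketch, with the digit's 0-9 value assumed to be in A):

            ld      l, a            ; the first digit becomes the initial total...
            ld      h, 0            ; ...so no multiply-by-ten is needed for it
            ; the x10-and-add loop then handles any remaining digits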
To be more precise about the speed increase over the current code, for decimals of length 1 it'll be 3.18x faster, for decimals of length 2, 2.50x faster, for length 3, 2.31x faster, for length 4, 2.22x faster, and for length 5 and above, at least 2.03x faster. In terms of cycles, this is around 100+(132*length) cycles saved per decimal.
* Fixed erroring out for all numbers >0x1999
I fixed the errors for numbers >0x1999. Sadly it is now 6 bytes bigger, so 5 bytes larger than the original, but the speed increases should still hold.
* Fixed more errors, clearer choice of constants
* Clearer choice of constants
* Moved and indented comment about fmtHex's method
* Marked inlined parseDecimalDigit uses
* Renamed .error, removed trailing whitespace, more verbose comments.