; collapseos/kernel/core.asm

; core
;
; Routines used pretty much everywhere. Unlike all other kernel units,
; this unit is designed to be included directly by userspace apps, not accessed
; through jump tables. The reason for this is that jump tables are a little
; costly in terms of machine cycles and that these routines are not very costly
; in terms of binary space.
; Therefore, this unit has to stay small and tight because it's repeated both
; in the kernel and in userspace. It should also be exclusively for routines
; used in the kernel.
; add the value of A into DE
addDE:
	push af
	add a, e
	jr nc, .end	; no carry? skip inc
	inc d
.end:
	ld e, a
	pop af
noop:			; piggybacking on the first "ret" we have
	ret
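; Usage sketch (hypothetical; recLen is illustrative, not a kernel symbol):
; advance a pointer held in DE by a byte count held in A.
;	ld a, (recLen)
;	call addDE	; DE += A; A and the flags come back untouched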
; add the value of A into HL
; affects carry flag according to the 16-bit addition, Z, S and P untouched.
addHL:
	push de
	ld d, 0
	ld e, a
	add hl, de
	pop de
	ret
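; Usage sketch (hypothetical; .wrapped is an illustrative label): since the
; carry reflects the 16-bit addition, a caller can detect HL wrapping past
; 0xffff.
;	call addHL	; HL += A
;	jr c, .wrapped	; only taken if HL overflowed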
; Dereference HL: load the word at (HL) into HL, little endian (least
; significant byte first). Done by copying (HL) into DE, then exchanging the
; two, utilising the optimised HL instructions.
intoHL:
	push de
	ld e, (hl)
	inc hl
	ld d, (hl)
	ex de, hl
	pop de
	ret
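; Usage sketch (hypothetical; someWordVar is illustrative): same effect as
; "ld hl, (someWordVar)", but usable when the address arrives in HL at runtime.
;	ld hl, someWordVar
;	call intoHL	; HL <- the 16-bit value stored at someWordVar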
intoDE:
	ex de, hl
	call intoHL
	ex de, hl	; de preserved by intoHL, so no push/pop needed
	ret
intoIX:
	push ix
	ex (sp), hl	; swap hl with ix, on the stack
	call intoHL
	ex (sp), hl	; restore hl from stack
	pop ix
	ret
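; Usage sketch (hypothetical; listHead is illustrative): intoDE and intoIX
; mirror intoHL for their own registers, handy for pointer-to-pointer data.
;	ld ix, listHead	; listHead: a word holding the address of a node
;	call intoIX	; IX <- first node
;	call intoIX	; IX <- the word that node begins with (e.g. its "next")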
; Compare HL with DE, setting Z and C the same way a regular "cp X" does,
; with HL in the role of A and DE in the role of X.
cpHLDE:
	push hl
	or a		; reset carry flag
	sbc hl, de	; there is no "sub hl, de", so we must use sbc
	pop hl
	ret
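; Usage sketch (hypothetical; .equal and .below are illustrative labels):
; branch on a 16-bit comparison exactly as one would after "cp".
;	call cpHLDE
;	jr z, .equal	; HL == DE
;	jr c, .below	; HL < DE (unsigned)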
; Write the contents of HL in (DE)
; de and hl are preserved, so no pushing/popping necessary
writeHLinDE:
	ex de, hl
	ld (hl), e
	inc hl
	ld (hl), d
	dec hl
	ex de, hl
	ret
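; Usage sketch (hypothetical; countVar is illustrative): the mirror operation
; of intoDE, storing a computed value into a word variable.
;	ld de, countVar
;	call writeHLinDE	; (countVar) <- HL, little endian; DE and HL intact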
; Call the routine whose address is stored at (IX), that is, dereference (IX)
; and call the result. Equivalent to intoIX followed by callIX.
callIXI:
	push ix
	call intoIX
	call callIX
	pop ix
	ret
; Jump to the location pointed to by IX. Having this as a routine lets us
; effectively "call IX" (by calling this routine) instead of just jumping to
; it. We use IX because we seldom use it to pass arguments.
callIX:
	jp (ix)
callIY:
	jp (iy)
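; Usage sketch (hypothetical; putcPtr is illustrative): dispatch through a
; routine pointer stored in memory.
;	ld ix, putcPtr	; putcPtr: a word holding a routine's address
;	call callIXI	; dereference putcPtr and call that routine
; When IX already holds the routine address itself, "call callIX" suffices.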
; Ensures that Z is unset (more complicated than it sounds...)
; There are often better inline alternatives, either replacing rets with
; appropriate jmps, or if an 8 bit register is known to not be 0, an inc
; then a dec. If a is nonzero, 'or a' is optimal.
unsetZ:
	or a		; if a nonzero, Z reset
	ret nz
	cp 1		; if a is zero, Z reset
	ret
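; Usage sketch (hypothetical): force an error-style return when nothing about
; A is known.
;	call unsetZ	; Z guaranteed reset, A preserved
;	ret
; Inline alternatives mentioned above, when a register is known to be nonzero:
;	or a		; A nonzero: resets Z in 1 byte
;	inc b		; B nonzero and must stay intact: inc then dec leaves B
;	dec b		; unchanged and resets Z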