
I know the Atari's FP package used BCD for rather dubious reasons, but does anyone know of other examples of basic "operating system" level code on common platforms that used BCD?

I suspect BCD was added due to the history of using micros in calculator roles, but I am wondering if this is something that was common in the home computer role.

By common platform I refer to Apple II, BBC, Commodore, etc.

  • I had a quick search through disassemblies of the BBC MOS and VIC-20 ROMs that I happened already to have on disk; not a single SED to be found. That feels like too superficial a test to roll into an answer, though.
    – Tommy
    Commented Jul 31, 2019 at 18:09
  • Thanks Tim. Do you know why either RS or TI chose to use BCD? Was there some advantage on the Z80 over the 6502? Commented Jul 31, 2019 at 18:54
  • I wouldn't call it dubious. The advantage of using BCD over binary is simply that the error margin is ... well ... decimal :)) Results will not diverge due to binary artefacts. Using BCD is elementary if the goal is to produce exact FP - exact in the sense that its results will be the very same, no matter whether done on a computer or in any classic way. So all these companies just wanted their computers to work as expected by teachers or mathematicians.
    – Raffzahn
    Commented Jul 31, 2019 at 19:10
  • What does "operating system" in scare quotes mean? What kind of operating system would have any reason to perform BCD calculations? BCD is for application code--especially when the application is expected to give exactly the same result that a pencil-and-paper algorithm or an old-fashioned mechanical calculator would have given. Commented Jul 31, 2019 at 19:59
  • @Raffzahn score*
    – JeremyP
    Commented Aug 1, 2019 at 14:43

3 Answers


(Only partly my answer, as the important list is a collection of what has been found and noted by others in comments to the question - I thought putting it into an answer would be helpful to others looking for it.)

I know the Atari's FP package used BCD for rather dubious reasons,

I wouldn't call it dubious, but rather obvious. BCD-based FP will always return the same result as a calculation done 'by hand', as there are no binary artefacts. All roundings due to limited precision will be the same as with 'classic' methods, thus yielding the 'right' result.
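
To see the effect concretely, here is a quick modern illustration - Python's decimal module stands in for a BCD package here, doing digit-wise decimal arithmetic just like pencil and paper:

    from decimal import Decimal

    # Binary FP: 0.1 has no exact binary representation, so artefacts creep in
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # Decimal (BCD-style) arithmetic matches the pencil-and-paper result
    print(Decimal('0.1') + Decimal('0.2'))                    # 0.3
    print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True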

but does anyone know of other examples of basic "operating system" level code on common platforms that used BCD?

I guess "operating system level" in this code covers what usual home computers of that time had in ROM. Right?

Now the list:

According to Tim Locke, systems using BCD-FP were:

  • Kyotronic 85 (Tandy M100/102, etc.) with an 8085 CPU
  • TI 99/4 (9900 CPU)
  • MSX (!) on Z80

All of them were 8-byte implementations with 14-digit precision (the Atari's was 6 bytes/10 digits). MSX-BASIC also offered a 4-byte, 6-digit single-precision format. MSX-BASIC is also quite notable, as it's based on Microsoft BASIC 4.5, which had binary FP - so there must have been an explicit request to change to decimal-based FP (*1).
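
As a sketch of what such a format looks like in memory, here is a hypothetical decoder for the commonly documented MSX-BASIC double layout - first byte holding the sign bit plus a 7-bit excess-64 decimal exponent, followed by 14 packed-BCD mantissa digits; treat the layout details as an assumption rather than gospel:

    from decimal import Decimal

    def decode_msx_double(b):
        """Sketch: decode an 8-byte MSX-BASIC BCD double (layout assumed:
        byte 0 = sign bit + 7-bit excess-64 decimal exponent,
        bytes 1..7 = 14 packed-BCD digits, read as a mantissa 0.dddd...)."""
        if b[0] & 0x7F == 0:
            return Decimal(0)                     # exponent 0 means zero (assumed)
        sign = -1 if b[0] & 0x80 else 1
        exp = (b[0] & 0x7F) - 64                  # decimal exponent, excess-64
        digits = ''.join(f'{x >> 4}{x & 0x0F}' for x in b[1:8])
        return sign * Decimal(digits).scaleb(exp - 14)

    # 1.0 would be 0x41 0x10 0x00 ... : exponent 65-64 = 1, mantissa 0.10000...
    print(decode_msx_double(bytes([0x41, 0x10, 0, 0, 0, 0, 0, 0])))  # 1.0000000000000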

In addition, Tim mentions the quite marvellous Kyan Pascal, available for several 6502 machines including the Atari 8-bit, Apple II and Commodore 64. Its real format was also BCD-based, with a length of 8 bytes and 13 valid digits (*2). This is especially remarkable as Kyan produced (in comparison) exceptionally fast code.

I suspect BCD was added due to the history of using micros in calculator roles

Isn't any FP use a 'calculator' one?

but I am wondering if this is something that was common in the home computer role.

Rather not. It is safer to assume it was a deliberate choice, following the targets set for development: if the designers or customers saw producing 'right' results as a design goal, then BCD it was.

This is also the reason why Microsoft used BCD-FP for Multiplan. After all, it would be quite odd if some super-expensive modern computer with even more expensive software produced a result that differed from what the old worn-out mechanical calculator said - and was obviously wrong when checked (*3).

Thus, offering a package with 'right' precision was a valid sales argument. I remember many articles in (micro)computer magazines back then musing about the artefacts and errors introduced into mathematics by the use of binary FP, and how to avoid them.


On a side note, JeremyP mentions that Space Invaders on the Commodore PET used BCD for some calculations. Not OS level, but quite interesting.


*1 - I wouldn't be surprised if the FP handling was taken from their COBOL compiler (*4) or from Multiplan, as both used BCD as well - BCD is a requirement for COBOL anyway, and it makes quite a lot of sense for a spreadsheet.

*2 - Unlike all the other BCD-FP formats here, the exponent was also stored in BCD - the others used binary to increase range (which works, as the exponent is a fixed-point value).

*3 - Never bet on users not finding a hidden error - anyone remember the Pentium Bug?

*4 - Fun fact: MS COBOL wasn't just some product for customers with old business code, but was used a lot within (early) MS products for micros - take Sort as an example :))

  • Though not implemented on the 6502, ANSI Full BASIC required the option of decimal floating point with its OPTION ARITHMETIC DECIMAL. MS have toyed with ANSI compatibility - their short-lived version of BASIC for the Mac came in both decimal and binary FP versions.
    – scruss
    Commented Aug 1, 2019 at 1:33
  • On "calculator use" - most pocket/desktop calculators use BCD arithmetic because it is required for financial calculations in certain regulations. A calculator that does not do so cannot legally be used for certain financial calculations, because it may sometimes produce slightly different results from a hand calculation that uses decimal rounding at each step. Financial auditors worry much more about errors of a penny or two than about errors of a million dollars, because the latter is obviously a mistake; the former may imply malfeasance.
    – Chromatix
    Commented Aug 4, 2019 at 19:28
  • @Chromatix Reminds me of what a man I learned a lot from used to say: the basic difference between an engineer and an accountant is what they see as the significant digits - the engineer will always check the first two to validate a result, while the accountant always checks the last two.
    – Raffzahn
    Commented Aug 4, 2019 at 19:32
  • Sharp pocket computers - the PC-1211, PC-1500, PC-1600 and all the SC61860-based ones (PC-125x, PC-140x, PC-1350, etc.) - all used BCD floating point: 8 bytes, 13-digit mantissa, 2-digit exponent, with the last nybble used for signs. Commented May 29, 2020 at 6:35
  • Sorry for the tardy reply - "but rather obvious" - in some contexts yes, but given that their goal was to produce a version of MS BASIC (for all intents) and that they had previously written several BASICs without BCD, the move in this case was extremely questionable, and the truly poor performance of the library was the result. Commented Jun 1, 2020 at 18:52

The KIM Math package used packed BCD for storage and unpacked decimal for computation. I used to have a vintage bound copy of the source code, but sold it to a collector. I have no idea whether any loadable-program computers included the KIM Math routines in ROM for use by loaded programs, but the package was published by MOS Technology--makers of the 6502--so it would seem that the chip designers were expecting people to use BCD for math work.
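
For readers who haven't met the terms: 'packed' BCD stores two decimal digits per byte, while 'unpacked' keeps one digit per byte, which is handier for digit-at-a-time arithmetic. A minimal sketch of the two representations (Python, with names made up for illustration):

    def pack_bcd(digits):
        """Pack decimal digits two per byte: [1, 2, 3, 4] -> bytes 0x12 0x34."""
        if len(digits) % 2:
            digits = [0] + digits            # pad to an even digit count
        return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

    def unpack_bcd(data):
        """Unpack to one digit per byte - the form used for computation."""
        return [d for byte in data for d in (byte >> 4, byte & 0x0F)]

    print(pack_bcd([1, 2, 3, 4]).hex())      # '1234'
    print(unpack_bcd(bytes([0x12, 0x34])))   # [1, 2, 3, 4]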

Also, I suspect that operating systems that need to output multi-digit values in decimal format may use BCD mode, because binary-to-decimal conversion can be done rather compactly.

For values 0-999:

    sed             ; decimal mode: ADC now produces packed-BCD results
    lda #0          ; A will accumulate the low two BCD digits
    sta desth       ; desth accumulates the hundreds, in binary
    ldx #16
lp:
    asl srcl
    rol srch        ; shift the next source bit into carry
    sta destl       ; use destl as scratch (sta leaves carry intact)
    adc destl       ; A = 2*A + bit, in BCD; carry set on overflow past 99
    rol desth       ; desth = 2*desth + that overflow
    dex
    bne lp
    sta destl       ; store the low two BCD digits
    cld
    rts

Values up to 25,500 could be accommodated by using the above to produce the bottom two digits, then moving desth to srcl (with srch cleared) and repeating the procedure. Bigger values can easily be accommodated if one can afford temporary space the same size as the source.
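
To make the algorithm easy to check, here is a high-level model of that loop - Python standing in as pseudocode; the names mirror the assembly, and the BCD accumulator is modelled as a plain 0-99 integer:

    def bin_to_bcd16(value):
        """Model of the loop above: returns (desth, destl), where destl holds
        the low two decimal digits as packed BCD and desth holds value // 100
        in binary - so one pass covers values up to 25,599."""
        a = 0                                       # BCD accumulator (two digits)
        desth = 0                                   # binary hundreds
        for i in range(15, -1, -1):
            bit = (value >> i) & 1                  # asl srcl / rol srch
            a = 2 * a + bit                         # sta destl / adc destl
            carry = 1 if a >= 100 else 0            # decimal-mode carry past 99
            a -= 100 * carry
            desth = ((desth << 1) | carry) & 0xFF   # rol desth
        return desth, (a // 10) << 4 | a % 10       # pack the low two digits

    h, l = bin_to_bcd16(999)
    print(h, hex(l))      # 9 0x99
    h, l = bin_to_bcd16(25500)
    print(h, hex(l))      # 255 0x0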


Visicalc for the Apple II used "a variation of decimal arithmetic" which I assume means BCD.

The reason they give is the same reason decimal arithmetic was used on mainframes:

[...] so all money values could be represented exactly, with no funny behavior common at the time from binary floating point.

And this would probably also be the reason for the use of BCD anywhere else, so you should look for applications or languages geared towards business (e.g. a COBOL compiler would be another good place to look).
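
The 'money values represented exactly' point is easy to demonstrate - again with Python's decimal module standing in for decimal/BCD arithmetic:

    from decimal import Decimal

    # Add one cent a hundred times - this should be exactly one dollar
    print(sum([0.01] * 100))              # 1.0000000000000007 - not exactly 1.0
    print(sum([Decimal('0.01')] * 100))   # 1.00 - exact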


As for 'basic "operating system" level code', I am not sure what this is supposed to be. E.g. on the Apple II, the disk operating system only dealt with disks, as the name says. It didn't deal with floating point or any other number representation.

The ROM of home computers often included BASIC, which often had floating point (and no kind of decimal arithmetic), but again, I wouldn't count this as "operating system".

  • If I recall, Visicalc used base-100 floating point, which could offer much better performance than base 10, but had highly variable relative precision.
    – supercat
    Commented Nov 9, 2020 at 23:30
