[Cryptography] GHCQ Penetration of Belgacom

Jerry Leichter leichter at lrw.com
Tue Dec 23 19:12:21 EST 2014


On Dec 23, 2014, at 6:00 PM, Dennis E. Hamilton <dennis.hamilton at acm.org> wrote:
>   I believe there was hardware that did this for addition and subtraction,
>   with all of the operations working in decimal serial, not unlike you
>   and I using common pencil-and-paper methods for arithmetic, including
>   multiplication and division.  
Ah, yes, the IBM 1620 - sometimes affectionately known as CADET, for Can't Add, Doesn't Even Try.

Memory consisted of decimal digits.  (To be accurate, memory consisted of 5-bit (wired core) units.  One bit was the "flag" bit, which indicated the most significant end of a field - typically a decimal number.  A field always had at least two digits and was addressed by its least-significant end (which had the highest address); the flag on the least significant digit indicated a negative value.  The remaining four bits encoded the decimal digits, a special record mark (used to delimit groups of fields), and a group mark (used to delimit groups of records - mainly for I/O).  The remaining four combinations were unused; they would not be produced by any normal operation, and if read from memory caused the machine to stop.)  (The 1620 was a single-user desk machine.  Not desk top - it was an entire rather large desk, complete with an embedded pre-Selectric IBM typewriter that formed the "console".)
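An illustrative sketch (not 1620-exact) of that field layout: model memory as (flag, digit) pairs, with the flag on the most significant digit marking the high end of a field and the flag on the least significant digit marking a negative value.

```python
def store_field(memory, addr, value):
    """Store `value` ending at `addr`; the field grows toward lower addresses."""
    digits = [int(d) for d in str(abs(value))]
    if len(digits) < 2:
        digits = [0] + digits          # a field always has at least two digits
    for i, d in enumerate(reversed(digits)):
        flag = (i == len(digits) - 1)  # flag marks the most significant digit
        memory[addr - i] = (flag, d)
    if value < 0:                      # negative: flag the least significant digit too
        _, d = memory[addr]
        memory[addr] = (True, d)

def load_field(memory, addr):
    """Read a field addressed by its least significant digit at `addr`."""
    neg = memory[addr][0]
    digits, i = [], 0
    while True:
        flag, d = memory[addr - i]
        digits.append(d)
        if flag and i > 0:             # the high-end flag terminates the field
            break
        i += 1
    value = int("".join(str(d) for d in reversed(digits)))
    return -value if neg else value

memory = [(False, 0)] * 100
store_field(memory, 50, -407)
print(load_field(memory, 50))          # -407
```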

Addition was done using addition tables that had to be loaded into memory:  Given two digits n and m, the hardware looked at location 001nm in memory.  It had to contain n+m mod 10 as its digit; its flag indicated whether there was a carry.   Some of the most complex wired logic in the machine handled adding in the carries, and complementing mod 10 to do subtraction (or addition of negative numbers, of course).  Addition was done digit pair by digit pair, just as you would do it on paper.  The addends could be of arbitrary length; the result, as I recall, replaced the first.  You had to make sure there was enough room - reading past the flagged end of an input field just produced zeros, but the output went right into memory beyond the end of the field.
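A minimal sketch of that table-driven scheme (addresses and layout simplified for illustration): the add-table entry for digits n and m sits at base + 10n + m and holds (n+m) mod 10 plus a carry flag, and addition walks the digit pairs from the low end, folding in the carry.

```python
ADD_BASE = 100  # hypothetical table base, after the text's 001nm scheme

def load_add_table(memory):
    """Populate the add table -- on the 1620 this was software's job, too."""
    for n in range(10):
        for m in range(10):
            s = n + m
            memory[ADD_BASE + 10 * n + m] = (s >= 10, s % 10)  # (carry flag, digit)

def table_add(memory, a_digits, b_digits):
    """Add two little-endian digit lists using only table lookups."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        n = a_digits[i] if i < len(a_digits) else 0   # past the field end reads as zero
        m = b_digits[i] if i < len(b_digits) else 0
        c1, d = memory[ADD_BASE + 10 * n + m]
        c2, d = memory[ADD_BASE + 10 * d + carry]     # fold in the previous carry
        carry = int(c1 or c2)
        result.append(d)
    if carry:
        result.append(1)
    return result

memory = [None] * 400
load_add_table(memory)
print(table_add(memory, [9, 8, 7], [5, 2]))  # 789 + 25 = 814 -> [4, 1, 8]
```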

Multiplication was repeated addition, again as you would do it on paper.  I can't remember the exact details, but the multiply tables went from 00200 to 00399, with, of course, two-digit results.  Division was repeated subtraction.  The region of memory going down from 00099 (and wrapping back around to the top of memory for really long results) was the target of any multiplication, and the dividend for division.
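A toy rendering of "multiplication was repeated addition": for each digit of the multiplier, add the multiplicand that many times, shifting one decimal place per digit, just as on paper. (Details like the product area below 00099 are omitted; this only shows the repeated-addition idea.)

```python
def repeated_add_multiply(a, b):
    """Multiply non-negative a by b using only addition, digit by digit."""
    product = 0
    for place, digit in enumerate(str(b)[::-1]):  # walk multiplier low digit first
        partial = 0
        for _ in range(int(digit)):               # "repeated addition" for one digit
            partial += a
        product += partial * 10 ** place          # shift one place per digit
    return product

print(repeated_add_multiply(1620, 37))  # 59940
```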

As I said above, the special locations from 00100 to 00399 were reserved for use as tables, but it was up to you to populate them.  If you didn't, arithmetic ... didn't work.  I actually loaded tables to do binary addition for a truth-table evaluator I wrote.

Interesting early (1960s) machines.  The only machines ever built, as far as I know, that did arbitrary-length arithmetic in hardware.  There was a complete floating point library that came with FORTRAN - decimal mantissa with power-of-ten exponent.  Like the hardware, it allowed arbitrary lengths for the mantissa.  (No one had a use for more than two-digit exponents, though supporting that would be easy enough.)  The FORTRAN (that would be FORTRAN II) compiler had a "FANDK" control card on which you specified the sizes to be used for floating and integer variables.  You could also specify a "noise digit", which would be shifted in on left shifts of floating point values.  Try your computation with noise digits of 0 and 9 and you could quickly see how many trailing digits of your output were just noise.  In general, decimal FP is a much better match to people's intuition than binary FP, so some of the common traps for the unwary in FP arithmetic aren't there.  And the ability to just add some more digits to the mantissa and see how things change is great.
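That "add more digits and see what changes" experiment has a close modern analogue in Python's decimal module (arbitrary-precision decimal FP, much in the spirit of the 1620 FORTRAN library, though not its noise-digit mechanism): rerun a computation at two mantissa lengths and compare; the trailing digits that change are noise.

```python
from decimal import Decimal, getcontext

def harmonic(n):
    """Sum 1/1 + 1/2 + ... + 1/n at the current decimal precision."""
    return sum(Decimal(1) / Decimal(k) for k in range(1, n + 1))

for digits in (8, 20):            # "mantissa lengths" to compare
    getcontext().prec = digits
    print(digits, harmonic(100))  # digits the two runs share are trustworthy
```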

Memory was 20,000, 40,000, or 60,000 decimal digits.  You could get a disk that was, I think, 1.5 million decimal digits.  (We're talking one of the washing-machine-sized units here.)  Available peripherals were a card reader/punch, a paper tape reader/punch, and a line printer.

Slow?  I once calculated that a 1620 was slower, on an equivalent addition, than one of the original vacuum tube computers.  Still ... it was a workable "personal" computer, and people did some significant engineering work on them.  (There was a "Model II" version which was at least twice as fast in general, and also came with an actual decimal adder, rather than using the add tables.  Multiplication and division were unchanged, however.)

People used to race the FORTRAN compiler.  Good coders won.  (It helped that the instruction set was very simple and logical.)

>   For the programmed check-on multiplications and divisions, I wonder
>   how often a problem was detected and what was done about it. 
I don't recall ever seeing this done.  Given how slow the machine was to begin with, it doesn't seem to be the kind of thing that would be used on any regular basis.
                                                        -- Jerry



