[Cryptography] Unicity distance of Playfair

mok-kong shen mok-kong.shen at t-online.de
Mon Mar 28 13:38:31 EDT 2016


Am 27.03.2016 um 01:57 schrieb Ray Dillinger:
>
>
> On 03/26/2016 07:10 AM, mok-kong shen wrote:
>
>> As is apparent from my OP, my knowledge of unicity-distance is very
>> poor. Thus I continue to wonder whether, in computing the
>> unicity-distance of a scheme, one could take into account the fact
>> that digrams rather than single characters are processed in each
>> step. For, as a parallel, in cryptanalysis one can study not only
>> the single-letter frequency distribution of the alphabetic
>> characters but also the frequency distributions of digrams, which
>> provides correspondingly additional information.
>
> The redundancy is an estimate of how much information is available
> from all those sources taken together: single-letter distribution,
> pairs, triples, word frequencies, the whole business.  English
> conveys about 1.5 bits per letter - the remaining representation
> bits, i.e., the redundancy, are 'redundant' because that information
> is both explicit in those bits and implicit in that you could
> predict them from all the other information.
>
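
If I understand that correctly, the 1.5 bits per letter goes into
Shannon's formula U = H(K)/D, with D the per-letter redundancy. For
simple monoalphabetic substitution that gives roughly 28 letters. A
minimal sketch in Python, taking the 1.5 figure as given:

    import math

    def unicity_distance(key_bits, rate=1.5, alphabet=26):
        # D = absolute rate minus true rate, in bits per letter
        redundancy = math.log2(alphabet) - rate
        return key_bits / redundancy

    key_bits = math.log2(math.factorial(26))  # log2(26!) ~ 88.4 bits
    print(unicity_distance(key_bits))         # ~27.6 letters
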
>> .... Take the
>> case of the classical poly-alphabetical substitution with a 26*26 table.
>> That table could have columns that are sufficiently random on the
>> one side (normally termed "with mixed alphabets") or have columns that
>> are simply shifted versions of the alphabet in its natural order
>> (normally termed a Vigenere table) on the other side. Now the cipher in
>> question is evidently stronger in the first case than in the second
>> (special) case. However, the unicity-distance would be the same for
>> both cases.  Am I right in this?
>
> I don't think so - a mixed-alphabet Alberti cipher seems a *lot*
> harder when solving it by hand.  I believe the mixed alphabet
> effectively multiplies the number of different keys. I've never come
> up with a solution to one that used a mixed alphabet different
> from the one the encryptor used, and if it were redundant I think
> I would have.
>
> But it often happens with classic hand ciphers that a mixed-alphabet
> substituted for a plain one doesn't actually increase the number
> of possible keys.  For example, when you compose two monoalphabetic
> ciphers you don't get a cipher with 26! times as many different
> keys (and longer unicity distance), you just get a cipher with
> 26! different ways to write down each key (and the same unicity
> distance).
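
That composition point can be checked directly: composing two random
substitution keys always yields just another substitution key, so the
composite cipher still has only 26! keys. A small sketch in Python
(assuming lowercase plaintext):

    import random
    import string

    A = string.ascii_lowercase

    def random_key():
        k = list(A)
        random.shuffle(k)
        return ''.join(k)

    def encrypt(text, key):
        # substitute A[i] -> key[i]
        return text.translate(str.maketrans(A, key))

    k1, k2 = random_key(), random_key()
    composite = encrypt(k1, k2)  # applying k2 to k1 composes the keys
    msg = 'composingtwomonoalphabeticciphers'
    assert encrypt(encrypt(msg, k1), k2) == encrypt(msg, composite)

Many different (k1, k2) pairs collapse to the same composite key, which
is exactly the "many ways to write down each key" situation above.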

To make my argument clear: Compare the general case of poly-alphabetical
substitution (with a 26*26 matrix whose columns are fairly random
permutations of the alphabet) and the special case (namely Vigenere,
with a 26*26 matrix whose first column is the alphabet as such and
each successive column is a rotation by one place of the preceding
column). Evidently the number of possible keys in the two cases is
exactly the same. But it is also evident that the general case should
be stronger. Now my question is: How could this difference be properly
reflected in the unicity-distance computation of the two cases? (My
current conjecture is: There is no way.)
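
To put numbers on it: if the table is treated as fixed in both cases
and the key is a keyword of length d, the standard computation sees
only H(K) = d*log2(26) and the redundancy D, and so returns the
identical answer for both tables. A minimal sketch in Python (d = 10
is an arbitrary choice; the 1.5 bits/letter is Ray's estimate above):

    import math

    d = 10                          # keyword length, arbitrary
    key_bits = d * math.log2(26)    # H(K), the same for both tables
    D = math.log2(26) - 1.5         # per-letter redundancy (Ray's estimate)
    print(key_bits / D)             # ~14.7 letters in either case

Nothing in the formula registers how the columns of the table were
constructed, which is what prompted my conjecture.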

M. K. Shen


