[Cryptography] Encodings for crypto

Phillip Hallam-Baker hallam at gmail.com
Tue Feb 18 11:28:33 EST 2014


On Tue, Feb 18, 2014 at 7:54 AM, ianG <iang at iang.org> wrote:

> I think we can do a lot lot better.  I have a document somewhere on
> this, but in brief:
>
> There are too many primitives.  I see 11 doing numbers alone!  In
> practice, in network protocols, we do not need bignums, we do not need
> floats and we do not need negatives.


But part of the exercise here is to try to get to convergence on one single
encoding by meeting all needs. I don't see a lot of need for compression or
for 128 bit floats. But some people need them for their data work. Hence
JSON-C and JSON-D.

I understand the temptation to go optimizing bits, but I don't think there
is any point for my current applications, which are JSON Web Services over
HTTP. However, right at the end of this over-long message I show a
situation where bit grinding does provide real value.


What we could do is to either define a profile that drops unneeded features
or have applications state which features they actually use.

If we were doing a crypto framing scheme then it would make sense to say
that the bits on the wire must only use the binary tags and that only
positive integers, strings and binary data chunks are used. But the
documentation could still use the JSON representation for illustrative
purposes.



> Then, for different sized numbers,
> we should remember that we are about simplification and higher level
> concepts.  Which should tell us we need a number.  Not the four
> remaining of 8, 16, 32, 64 bits which are hangovers from the hardware
> days.  We need one number that simply expands to fill the needs.  This
> is done with the 7 bit trick, where the high bit being set says there is
> a following extra byte.
>

The problem with the 7-bit trick is that it requires the following bytes to
be shifted about; the same is true of the similar UTF-8 scheme. The JSON data
model is simple enough that one byte suffices for the combined tag and length
of the data to follow.
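
For illustration (this is just a sketch of the scheme Ian describes, in C#,
not code from my library), the shifting and masking I mean shows up on both
the write and the read path:

using System.IO;

// Sketch of the 7-bit continuation scheme (illustration only). The high bit
// of each byte flags that another byte follows, so the value has to be
// shifted and masked seven bits at a time in both directions.
static class VarInt {
    public static void Write(Stream output, ulong value) {
        while (value >= 0x80) {
            output.WriteByte((byte)((value & 0x7F) | 0x80)); // low 7 bits, continuation bit set
            value >>= 7;
        }
        output.WriteByte((byte)value);                       // final byte, continuation bit clear
    }

    public static ulong Read(Stream input) {
        ulong value = 0;
        int shift = 0;
        while (true) {
            int next = input.ReadByte();
            if (next < 0) throw new EndOfStreamException();
            value |= (ulong)(next & 0x7F) << shift;          // splice the 7 bits back in
            if ((next & 0x80) == 0) return value;
            shift += 7;
        }
    }
}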

Allowing an integer to be any number of bytes from 1 to 8 requires 3 bits,
which is a lot. Restricting the number of bytes to 1, 2, 4 or 8 saves a bit,
but that is all that is going on. Using a 16-bit representation on the wire
does not mean that the corresponding data model representation is 16 bit;
it is just that the number either does or does not fit into 16 bits.

Since the JSON data model has negative integers, it is necessary to deal
with signs, which means either two's complement or a sign bit. I prefer a
sign bit because it is a lot simpler to code and avoids confusion between
representations.

So I would argue that there are actually only two number representations in
JSON-B: integer and float.
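
To make that concrete, here is a rough sketch of the write side of such an
integer encoding: pick the smallest of the 1/2/4/8 byte widths that holds the
magnitude and put the sign in the tag. The tag values are made up for
illustration; they are not the actual JSON-B code points.

using System.IO;

// Sketch of a one-byte-tag integer encoding (illustration only, invented tags).
static class TaggedInteger {
    public static void Write(Stream output, long value) {
        bool negative = value < 0;
        // Magnitude computed in a way that is safe even for long.MinValue.
        ulong magnitude = negative ? (ulong)(-(value + 1)) + 1 : (ulong)value;

        int widthCode =                       // 0 -> 1 byte, 1 -> 2, 2 -> 4, 3 -> 8
            magnitude <= 0xFF ? 0 :
            magnitude <= 0xFFFF ? 1 :
            magnitude <= 0xFFFFFFFF ? 2 : 3;
        int byteCount = 1 << widthCode;

        // Hypothetical tag layout: bit 2 is the sign, bits 0-1 select the width.
        output.WriteByte((byte)((negative ? 0x04 : 0x00) | widthCode));

        for (int shift = (byteCount - 1) * 8; shift >= 0; shift -= 8) {
            output.WriteByte((byte)(magnitude >> shift));    // magnitude, big-endian
        }
    }
}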


If a protocol involves integers bigger than 32 bits then ultra-compact
representations are probably not a priority. That is why a 16-bit length is
used for bigints: while it is virtually certain that a bigint will fit into
an 8-bit length, saving the extra byte is not worth the extra code point.
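
Again purely for illustration (the tag value here is invented, not a JSON-B
code point), the bigint case is just a tag, a 16-bit length and the magnitude
bytes:

using System.IO;

// Sketch of a length-prefixed bigint: hypothetical tag byte, 16-bit
// big-endian length, then the magnitude bytes.
static class BigIntChunk {
    public static void Write(Stream output, byte[] magnitude) {
        output.WriteByte(0x0B);                            // hypothetical "bigint" tag
        output.WriteByte((byte)(magnitude.Length >> 8));   // length, high byte
        output.WriteByte((byte)(magnitude.Length & 0xFF)); // length, low byte
        output.Write(magnitude, 0, magnitude.Length);
    }
}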

I agree that floating point values are not necessary for protocols and
arguably neither are negative integers. But they are part of the JSON data
model which means they have to be supported.



> Next, we should really be thinking in OO terms.  When we are dealing in
> OO, we have a single object that 'knows' its output, and its input
> intimately.  Which is to say, it knows whereas your spec does not.  The
> object can do semantics such as range checking and small composition
> such as conversion of byte arrays to strings.
>

Well the code is in C# so that is how the encoder/decoder works. Not sure
what you are getting at here.



> We do however need a sequence of bytes, and a byte array is constructed
> simply with a length (number as above) and a sequence of bytes.  That's
> 2 primitives so far.
>

Sounds like you are doing a compact binary version of LISP S-Expressions,
which is another totally valid approach.



> One more thing.  Without a seriously good testing and debugging system,
> this whole area breaks down with complexity, which is why people eschew
> binary and go for text.  There is a loopback technique I use to solve
> this issue, which I call the Ouroboros pattern.  In short it is this:
>
> 1.  each object has an example() method which produces a correct object
> with with each field constructed randomly.
> 2.  write that out through the stream process.
> 3.  read it back in to a new object, using the converse stream process.
> 4.  compare the two objects with standard equals() method.
>
> Run 2^5 times for each class.  Due to composition and repeated tests
> this solves the complexity issue.
>

Yep, regression testing. I have not got round to writing this yet, but I plan
to. I wanted to re-implement the scheme in C first; then I can generate the
same data inputs for C and C# and check that they produce the same outputs
and that decoding recovers the original data.
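
As a sketch of what that loopback test might look like (the delegates here
are placeholders for whatever the generated classes actually provide, not the
PROTOGEN API):

using System;
using System.IO;

// Sketch of the loopback ("Ouroboros") test described above: build a valid
// object with random field values, write it out, read it back and compare.
static class LoopbackTest {
    public static void Run<T>(
            Func<Random, T> example,        // build a valid object with random fields
            Action<T, Stream> serialize,    // object -> wire format
            Func<Stream, T> deserialize,    // wire format -> object
            int iterations = 32) {          // 2^5 runs per class
        var random = new Random();
        for (int i = 0; i < iterations; i++) {
            T original = example(random);

            var buffer = new MemoryStream();
            serialize(original, buffer);

            buffer.Position = 0;
            T decoded = deserialize(buffer);

            if (!original.Equals(decoded)) {
                throw new Exception("Round trip failed on iteration " + i);
            }
        }
    }
}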

> Probably the key point here is that if you are still thinking about
> protocols along the old bits & Bytes way that Richard highlighted,
> you're missing out.  Doing protocols with OO thinking is so much easier
> you never ever go back.  Once you do that, all of the formats, ideas,
> layouts, MLs start to look a little .. 20th century, steampunk, historical.
>

I am not sure about the OO thing since I was doing OO back in the days when
it really was message passing and then C++ came along and made a mess of
the ideas.

What we are really doing with a protocol compiler is writing the on-the-wire
format for messages passed between concurrent network objects.

Further, in a message framework, those messages correspond to method
invocations on the objects. So the (abbreviated) signature of a Transaction
in the Confirmation protocol is:

    Transaction Enquirer Confirmation ConfirmationRequest ConfirmationResponse
        Description
            |Post a request for confirmation to a user.
        Status Success
        Status Refused
        Status UnknownUser

    Message ConfirmationRequest
        Description
            |Request a confirmation from a specified user.
        String Account
            Required
            Description
                |The user being asked to provide confirmation.
            Description
                |The format of the account identifier is the same as for email,
                |i.e. <username>@<domain>
        String Text
            Required
        String Option
            Multiple

When this is translated into C# we get the following class (abbreviated):

public partial class ConfirmationRequest : CNF {
    public string Account;
    public string Text;
    public List<string> Option;

Or in C, the following structure:

typedef struct _CNF_ConfirmationRequest {
    struct _CNF_ConfirmationRequest *_Next;
    int _Type;
    JSON_String Account;
    JSON_String Text;
    JSON_Group Option;
} CNF_ConfirmationRequest;

It turns out that most IETF protocols can be implemented as a sequence of
RPC calls between a client and a server. The main exceptions are Jabber
(XMPP) and IRC, which are essentially reverse RPC calls: the client opens up
a context and then receives a series of RPC calls from the server. I have
looked into that, but to support that communication pattern in a web service
I would either need to use an extended HTTP that supports multiple streams
(e.g. HTTP/2.0) or write my own.


Although PROTOGEN was originally designed to be a protocol synthesizer just
for JSON, writing a C library means that I need to be able to make HTTP
calls out. That in turn means either calling someone else's API, dealing
with a different implementation of strings and a lot of code I don't need,
or writing my own very simple HTTP implementation.

So last night I wrote a PROTOGEN schema for HTTP (or at least the parts I
care about) and I hope to have it done this week.

Protocol Goedel.HTTP HTTPG

    Structure Common
        String Content_Type
        Integer Content_Length
        DateTime Date

    Structure Request
        Inherits Common
        String Accept
            Multiple
        String Authorization
            Multiple
        [etc]

One side product of this work is that I can now encode HTTP messages in
JSON syntax rather than the traditional RFC822-style headers. That in turn
means that I can write a Web service that uses only one encoding for both
the wrapper and the contents, and since the compiler supports JSON-B (or
soon will) I can optimize a protocol for space by simply setting the
encoding flag to JSON-B rather than JSON.
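
Purely to illustrate the idea (this uses a stock JSON serializer, not the
PROTOGEN generated code, and the output shape is my guess rather than a
defined format), a Request built from the schema above comes out as a JSON
object instead of a block of header lines:

using System;
using System.Collections.Generic;
using System.Text.Json;

// Illustration only: the Request structure from the schema above, rendered
// as a JSON object instead of RFC822-style header lines.
class Request {
    public string Content_Type { get; set; }
    public int Content_Length { get; set; }
    public DateTime Date { get; set; }
    public List<string> Accept { get; set; }
    public List<string> Authorization { get; set; }
}

class HttpAsJsonExample {
    static void Main() {
        var request = new Request {
            Content_Type = "application/json",
            Content_Length = 512,
            Date = DateTime.UtcNow,
            Accept = new List<string> { "application/json" },
            Authorization = new List<string>()
        };

        // Something like:
        // {"Content_Type":"application/json","Content_Length":512,"Date":"...",...}
        Console.WriteLine(JsonSerializer.Serialize(request));
    }
}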

If we went a step further and encoded the SSL layer in the same encoding,
ditching the ad-hoc SSL/TLS encodings, this would provide consistency all
the way down to the IP layer.

At this point Ian's proposal for a totally minimal binary encoding with no
frills suddenly looks very attractive. The HTTP layers don't need floats or
the like so ditch them.

-- 
Website: http://hallambaker.com/