lockable trapdoor one-way function

Jerrold Leichter jerrold.leichter at smarts.com
Mon Dec 1 11:35:56 EST 2003


| Does anyone know of a trapdoor one-way function whose trapdoor can be locked
| after use?
|
| It can be done with secure hardware and/or distributed trust: just delete
| the trapdoor key, and prove (somehow?) you've deleted it.
|
| It looks hard to do in "trust-the-math-only" mode...
You're going to have to be much more explicit about the details to have any
hope of getting at this.  If you take too naive an approach, even defining
a simple trapdoor function is impossible.  "Given f(X), it's impractical to
compute X."  However, whoever computed f(X) can of course give you X!  So
one sets
it up as something like:  If we have any probabilistic Turing machine T
(subject to some constraints on size and running time) which is given an input
tape containing f(X), then over all X and all runs of the machine, the
probability that it halts with the tape containing X is less than (some limit
"not much bigger than" the probability of just just guessing X correctly).
This is the only way I know of to capture the notion of T "is only given f(X)".
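
Spelled out, the condition might read something like this (my rendering;
the security parameter k is left implicit above):

    \[
      \forall T:\quad
      \Pr_{X,\ \text{coins of } T}\bigl[\,T(f(X)) = X\,\bigr]
        \;\le\; \Pr[\text{guess } X \text{ blindly}] \;+\; \varepsilon(k),
      \qquad \varepsilon\ \text{negligible.}
    \]
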
But this whole approach can't possibly model the notion of "forgetting" X,
since the only way you can *prove* you've forgotten something is to show
that *all* the memory you have access to, or will ever have access to in
the future, is (say) zeroed.

You could certainly spread the trust out.  For example, one could imagine a
trap door computed using shared-secret techniques:  The trap door information
is shared among K participants.  The K participants jointly compute the trap
door function.  Even *after* the computation is complete, it still takes all
K participants to compute the function.  So if we now tell all the
participants to forget their portion of the secret, then as long as you
trust *any one* of them to actually do so, the trap door information is
gone.
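
As a concrete sketch of that idea (my own toy construction, assuming RSA
as the trap door function and additive sharing of the private exponent;
every name and parameter below is made up for illustration):

    import secrets

    # Toy RSA parameters -- far too small for real use.
    p, q = 1000003, 1000033
    n = p * q
    phi = (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)       # the trap door (Python 3.8+ modular inverse)

    def share_trapdoor(d, k, phi):
        # Split d into k additive shares mod phi; d itself can be erased.
        shares = [secrets.randbelow(phi) for _ in range(k - 1)]
        shares.append((d - sum(shares)) % phi)
        return shares

    def partial_invert(y, share, n):
        # Each participant raises the input to its own share of d.
        return pow(y, share, n)

    def combine(partials, n):
        # The product of the partial results is y^d mod n.
        out = 1
        for t in partials:
            out = (out * t) % n
        return out

    shares = share_trapdoor(d, 5, phi)    # d itself may now be forgotten

    x = 424242
    y = pow(x, e, n)                      # f(X): anyone can compute this
    partials = [partial_invert(y, s, n) for s in shares]
    assert combine(partials, n) == x      # all 5 shares needed to invert

If even one participant genuinely erases its share, d is gone for good:
any K-1 additive shares are statistically independent of d.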

Even this has a subtlety:  What's to prevent a participant - or anyone else? -
from storing <X,f(X)> pairs?  So, on the one hand, you may need to state
the requirement in terms of being unable to compute f(X) or its inverse on
data that has never been presented to the K participants.  Alternatively,
you could
insist that the K participants individually never see X or f(X).  (But then
who *does* see them to distribute and finalize them?)
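
The second alternative is at least approachable in the RSA sketch above:
the requester can blind its input so the participants see neither X nor
f(X) (the Chaum-style blinding trick; again just my illustration):

    import math, secrets

    # Same toy RSA setup and shares as in the sketch above.
    p, q = 1000003, 1000033
    n, e = p * q, 65537
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)
    shares = [secrets.randbelow(phi) for _ in range(4)]
    shares.append((d - sum(shares)) % phi)

    x = 424242
    y = pow(x, e, n)              # f(X), known only to the requester

    # The requester picks a blinding factor r and submits y * r^e, not y.
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    y_blind = (y * pow(r, e, n)) % n

    # Participants jointly invert the blinded value: (y * r^e)^d = x * r.
    x_blind = 1
    for s in shares:
        x_blind = (x_blind * pow(y_blind, s, n)) % n

    # Only the requester knows r and can strip the blinding.
    assert (x_blind * pow(r, -1, n)) % n == x

That settles who sees X and f(X) - nobody but the requester - though
someone still had to generate and deal out the shares of d in the first
place.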

There's the seed of an interesting primitive here, but exactly what it is
seems difficult to pin down.  What application do you have in mind for such a
thing?
							-- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


