<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 31/12/2013 06:52 p.m., Krisztián
Pintér wrote:<br>
</div>
<blockquote cite="mid:112446395.20131231225253@gmail.com"
type="cite">
<pre wrap="">
Sergio Lerner (at Tuesday, December 31, 2013, 9:35:23 PM):
</pre>
<blockquote type="cite">
<pre wrap="">I've been playing with a property I named Strict memory hard hash
functions. Strict memory hard functions are an extension of memory hard
<a class="moz-txt-link-freetext" href="http://bitslog.files.wordpress.com/2013/12/memohash-v0-1.pdf">http://bitslog.files.wordpress.com/2013/12/memohash-v0-1.pdf</a>
</pre>
</blockquote>
<pre wrap="">
but my question is: you are aware of the ongoing password hashing
competition <a class="moz-txt-link-abbreviated" href="http://www.password-hashing.net">www.password-hashing.net</a>, aren't you?</pre>
</blockquote>
Yes, but I'm unsure whether SeqMemoHash can compete with scrypt for
password hashing: password hashing does not require a hard
(theoretical) lower bound on the memory required, just an economic
incentive not to use less memory. So SeqMemoHash will be slower than
any "weakly memory-limited" competitor. Nevertheless, I haven't yet
done side-by-side performance comparisons, and I will.<br>
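<br>
For concreteness, here is a rough sketch of the general shape of such a
sequential memory-filling hash. This is only an illustration of the
pattern, not the paper's exact SeqMemoHash specification: the function
name, the choice of SHA-256 as the underlying PRF, the wrap-around
indexing and the parameters are all placeholders.<br>
<pre>
import hashlib

def seq_memo_hash_sketch(seed: bytes, n_blocks: int, rounds: int) -> bytes:
    # Illustrative only -- NOT the exact SeqMemoHash from the paper.
    # Memory is filled from the seed, then rewritten in sequential
    # passes so that each cell depends on the latest value of its
    # left neighbour.
    H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()
    mem = [H(seed, i.to_bytes(4, "big")) for i in range(n_blocks)]
    for _ in range(rounds):
        for i in range(n_blocks):
            # mem[i - 1] wraps around to the last cell when i == 0
            mem[i] = H(mem[i - 1], mem[i])
    return H(*mem)
</pre>
An honest evaluator keeping all n_blocks cells pays rounds * n_blocks
hash calls; one keeping fewer cells must recompute discarded values,
which is where the economic incentive mentioned above comes from.<br>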
<br>
<blockquote cite="mid:112446395.20131231225253@gmail.com"
type="cite">
<pre wrap="">
i see one downside though. if i understand correctly, you need N^2
time if we want N memory blocks (blocks being a reasonable output size
of some PRF). </pre>
</blockquote>
No, you don't actually need N^2 time. As far as I can see, since the
amount of temporary memory increases with each "backtrack", the
number of "backtracks" each round has to do increases at each depth.
So with only 35 rounds (for any value of N>35), the first round
(the last round in the backtrack) will be evaluated 35! times, and
since 35! is about 2^133 that already exceeds 128-bit security. So
for N>35 the amount of time grows linearly with the amount of
memory, since no more than 35 rounds are needed.<br>
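<br>
The arithmetic behind that claim is easy to verify; a one-liner in
Python confirms that 35! is just above 2^132:<br>
<pre>
import math

# log2(35!) is about 132.9, so evaluating the first round 35! times
# costs more than 2^128 work.
print(math.log2(math.factorial(35)))   # prints 132.92...
</pre>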
<br>
<blockquote cite="mid:112446395.20131231225253@gmail.com"
type="cite">
<pre wrap="">it either limits us severely in memory use if we aim for
"regular" block sizes like 256 to 512 bits, or requires huge block
size. </pre>
</blockquote>
As I said previously, the compression function's internal state size
does not need to grow (in SeqMemoHash). For TreeMemoHash I think
that's true, but it could be improved.<br>
<blockquote cite="mid:112446395.20131231225253@gmail.com"
type="cite">
<pre wrap="">the latter approach seems to ease the problem to some degree.
can we consider like 65536 bit or higher block sizes? does that hurt
security? maybe we need to consider the internals of such a huge PRF?
</pre>
</blockquote>
Maybe the Keccak sponge design can be securely used for any desired
state memory size (the parameter <i>b</i> in the Keccak design).<br>
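<br>
Note that the standardized Keccak instances fix the permutation width
at <i>b</i> = 1600 bits, so one way to get very large blocks today,
without settling the open question of whether a natively wider
permutation stays secure, is to stretch the output with a Keccak-based
extendable-output function such as SHAKE-256. A minimal sketch (the
function name and the 65536-bit block size are just placeholders):<br>
<pre>
import hashlib

def big_block(data: bytes, block_bits: int = 65536) -> bytes:
    # Emulate a PRF with a huge output block by squeezing SHAKE-256,
    # a Keccak-based extendable-output function. The sponge state
    # itself stays at b = 1600 bits; only the squeezed output grows.
    return hashlib.shake_256(data).digest(block_bits // 8)

assert len(big_block(b"seed")) == 8192   # 65536 bits
</pre>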
<br>
Best regards, Sergio.<br>
</body>
</html>