<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 02/03/2025 02:00, Marek Tichy wrote:<span
style="white-space: pre-wrap">
</span></div>
<blockquote type="cite"
cite="mid:f3428a50-2d02-42f0-a8bc-eaaa9eca2efd@gn.apc.org">
<pre wrap="" class="moz-quote-pre">I'm also trying to find someone who would join me in pursuing the idea
of having global autonomous digital identities anchored in a web of
trust kept in a permissionless distributed ledger.</pre>
</blockquote>
<p><br>
</p>
<p>There's an enormous class of security problems that could be
solved by "if only we really knew who everyone was." That instinct
derives from our anthropology as tribal animals: so many of our
normal processes are protected by knowing everyone around us. It's
built into our brains.<br>
</p>
<p>Unfortunately there isn't a really good technical solution for
that at remote or Internet scale. WoT didn't work, in large part
because nobody knew what the T (trust) meant. The CA/PKI/X.509
industrial complex didn't really work, in large part because its
business model of selling numbers for money didn't align with
users' needs.</p>
<p>That said, there is a long-running thing called Rebooting Web of
Trust (RWOT), which runs about two events per year on this goal.
That crowd is strongly aligned with Verifiable Credentials (VCs)
and Decentralised IDentifiers (DIDs), and less strongly with a
group pushing Self-Sovereign Identity (SSI), which seems to have
lost its way, probably because they understood neither the I nor
the T, nor the business model, nor the technology.<br>
</p>
<p><br>
</p>
<blockquote type="cite"
cite="mid:f3428a50-2d02-42f0-a8bc-eaaa9eca2efd@gn.apc.org">
<pre wrap="" class="moz-quote-pre">We need a way to tell AI from humans and yesterday was too late to
switch to a pseudonymous internet.</pre>
</blockquote>
<p><br>
</p>
<p>It would be useful - but hard. Because you're asking a security
question, you have to think in adversarial terms: how would you
attack the system?</p>
<p>The simplest attack is to create a million nyms and lie - the
classic Sybil attack. AI is very good at lying, and can do it at a
scale humans can't match. So a simple, first-order web of nyms
won't work.</p>
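<p>A toy sketch of my own (not from the original mail) of why a
first-order scheme fails: if trust is just "how many nyms vouch for
you?", one operator who mints a million sybils that vouch for a fake
identity beats any honest community. All names here are made up.</p>

```python
# Toy Sybil-attack illustration: first-order endorsement counting
# is trivially gamed by mass-produced nyms.
from collections import defaultdict

endorsements = defaultdict(set)  # nym -> set of nyms vouching for it

def vouch(voucher, target):
    endorsements[target].add(voucher)

def naive_trust(nym):
    # First-order score: just count distinct vouchers.
    return len(endorsements[nym])

# A small honest community: three people vouch for "alice".
for honest in ["bob", "carol", "dave"]:
    vouch(honest, "alice")

# The attack: one operator creates a million sybils and has them
# all vouch for a single fake identity.
for i in range(1_000_000):
    vouch(f"sybil-{i}", "mallory")

print(naive_trust("alice"))    # 3
print(naive_trust("mallory"))  # 1000000 -- the liar "wins"
```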
<p>Somehow you have to stop the nym holder from lying. The only way
to do that is to align the incentives so that it's better for the
holder to tell the truth and worse to lie. The general answer is
carrot &amp; stick.</p>
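<p>One hypothetical way to wire up that carrot &amp; stick (my
sketch, not a scheme the mail proposes): each nym posts a deposit
on entry; verified-true claims earn a small reward, and a claim
later proven false forfeits the whole deposit and ejects the nym.
Lying only pays while undetected. All numbers and names are
illustrative.</p>

```python
# Crude carrot-and-stick incentive sketch for nym holders.
DEPOSIT = 100
REWARD = 1
PENALTY = DEPOSIT  # slash the whole deposit on a proven lie

class Nym:
    def __init__(self, name):
        self.name = name
        self.balance = -DEPOSIT  # cost of entry: the posted stake
        self.active = True

    def claim_verified_true(self):
        # Carrot: a small reward for each claim that checks out.
        if self.active:
            self.balance += REWARD

    def claim_proven_false(self):
        # Stick: lose the stake and get ejected from the community.
        if self.active:
            self.balance -= PENALTY
            self.active = False

honest = Nym("honest")
for _ in range(200):
    honest.claim_verified_true()

liar = Nym("liar")
liar.claim_proven_false()   # one caught lie ends it

print(honest.balance)  # 100: deposit recouped and then some
print(liar.balance)    # -200: entry cost plus slashed stake
```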
<p>Carrot &amp; stick works well with nation states. But for
various reasons, the nation states have trouble working with
public keys. What does work is communities that have some inner
strength. For an example of one that worked, look at CAcert: these
days it's a shadow of its former self, but it did crack the
problem of honesty versus lying, at Internet scale.<br>
</p>
<p><br>
</p>
<p>iang</p>
</body>
</html>