[Cryptography] People vs AI

Marek Tichy marek at gn.apc.org
Fri Mar 7 02:51:46 EST 2025


On 03. 03. 25 23:03, iang wrote:
>
>
> On 03/03/2025 18:29, Marek Tichy wrote:
>> Thanks a lot iang for your elaborate answer
>>> There's an enormous class of security problems that can be solved by
>>> "if only we really knew who everyone was." Which derives from our
>>> anthropology as tribal animals, in which so many of our normal
>>> processes are protected by knowing everyone around us. It's inbuilt
>>> into our brains.
>>>
>>> Unfortunately there isn't a really good technical solution for that at
>>> remote or Internet scale. WoT didn't work in large part because nobody
>>> knew what the T meant. The CA/PKI/x509 industrial complex didn't
>>> really work in large part because their business model of selling
>>> numbers for money didn't align with needs.
>>>
>> It has to be free and radically bottom up.
>
>
> That's hard as you need a lot of code, which needs money to pay for 
> that code, and therefore some form of business model. Normally. I know 
> there is this kind of dream that open source will make things 
> magically self-birth, but open source always seems to succumb to the 
> business model sooner or later.
>
> (I say that as I do actually build one of them myself, sans business 
> model...)
>
Look at how much money is being poured into the AI frenzy. Besides, 
there's so much money in anonymous proof of age that it could take you 
to Pluto (remember Thawte selling snake oil for decades, until it became 
Let’s Encrypt snake oil?).
People managed to create a gamble of unprecedented scale (Bitcoin, 
started on this very list) just by sitting in front of a screen. I would 
not be too worried about this part.
With that said, it should be clear by now that this problem inherently 
resists any attempt at monetization and centralization (look, for 
instance, at Sovrin or Microsoft's attempts to solve it - Microsoft 
Entra Verified ID being just the latest in a very long line).
>
>>> That said, there is a long-running thing called Rebooting Web of Trust
>>> (RWOT) which runs like 2 events per year on this goal. This crowd is
>>> strongly aligned with Verifiable Credentials (VCs) and Decentralised
>>> IDentifiers (DIDs). And less strongly with a group pushing
>>> Self-Sovereign Identity (SSI), which seems to have lost its way,
>>> probably because they didn't understand the I nor the T, nor the
>>> business model nor the technology.
>>>
>> I know these guys.
>
>
> Without knowing the ins and outs of DIDs etc, this did seem interesting:
>
> https://ggreve.medium.com/a-future-for-self-sovereign-identity-c237caa5e46f
>
> in that it suggests their implementation is about to lift off, and is 
> properly decentralised, which has been a criticism of others.
>
Thanks for this article; I always felt there might lie something in the 
IPFS direction.
>
>> How about each new DID is validated by at least two already existing
>> DIDs? As part of this initial validation, VCs about some basic
>> properties like name, place of birth, age can be issued.
>>
>> That DID then lives and gradually collects various other, stronger VCs.
>> The service providers can choose what level of certification they
>> require for their service to be available.
>
>
> How do you enforce "two" ? What does the application do with two-ness 
> when it sees it? Is two-ness too strong for some purposes and too weak 
> for others?
>
Think of two parents having a child (including the limited capacity and 
rate of doing so). That's what we are modelling.
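The two-parents rule above can be sketched as a small admission check. This is a purely illustrative model, not any real DID library's API: the `Registry` class, its `admit` method, and the `max_children` capacity limit are all my assumptions here.

```python
# Hypothetical sketch of the "two existing identities vouch for each
# new one" rule, with a capacity limit modelling the limited rate at
# which parents can have children. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Registry:
    # child DID -> set of DIDs that vouched for it (genesis: empty set)
    parents: dict = field(default_factory=dict)
    # how many admissions each DID has sponsored so far
    children: dict = field(default_factory=dict)
    max_children: int = 5  # assumed capacity limit per sponsor

    def admit(self, new_did: str, sponsor_a: str, sponsor_b: str) -> bool:
        """Admit new_did only if two distinct, already-admitted
        sponsors vouch for it, each still under capacity."""
        if sponsor_a == sponsor_b:
            return False
        for s in (sponsor_a, sponsor_b):
            if s not in self.parents:              # sponsor must exist
                return False
            if self.children.get(s, 0) >= self.max_children:
                return False                       # capacity exhausted
        self.parents[new_did] = {sponsor_a, sponsor_b}
        for s in (sponsor_a, sponsor_b):
            self.children[s] = self.children.get(s, 0) + 1
        return True
```

A handful of genesis identities would have to be seeded directly, just as IOTA's Tangle starts from a genesis transaction.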
>
> How does it relate to real life? Do you talk to people at a social 
> gathering if they have two-ness?
>
> My preferred approach to this question is to use micro-communities. 
> Approximately 20-30 people. This way, everyone knows everyone inside, 
> and therefore can inject that knowingness into the technology. I think 
> this works well, but it is hard to do in the West. Much more prevalent 
> in the East, for reasons.
>

>>>> We need a way to tell AI from humans and yesterday was too late to
>>>> switch to a pseudonymous internet.
>>> Would be useful - but hard. Because you're asking a security question,
>>> one has to think in adversarial terms. How would you attack the system?
>>>
>>> The simplest attack is to create a million of the nyms and lie.
>>> Actually AI is very good at lying. And can do it better at scale than
>>> humans. So a simple, first order web of nyms won't work.
>>>
>>> Somehow you have to stop the nym holder from lying. The only way to do
>>> that is to make the incentives align such that it's better for the
>>> holder to tell the truth and worse to lie. A general answer is carrot
>>> & stick.
>>>
>> The carrot in this case would be gaining access. To porn and gamble,
>> ideally.
>>
>> The stick could be pruning entire dishonest branches together with their
>> parents.
>
>
> Right, plenty of carrot. The stick is a harder problem. Just kicking 
> out a dishonest branch isn't going to hurt much if it was built by an 
> AI. This is like the old joke "how do you punish a public key ?  Lop a 
> few bits off ???"
>
Just wipe them out.
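The pruning stick could be sketched as a cascade over the sponsorship graph. This is an assumed graph shape (child mapped to its set of sponsors), not a spec; the policy shown here drops a child as soon as any of its sponsors falls, while a gentler variant might require both sponsors to be removed.

```python
# Illustrative sketch of pruning a dishonest identity together with
# its parents and every identity whose admission chain passes through
# a removed node. `parents` maps child DID -> set of sponsor DIDs.
def prune(parents: dict, bad: str) -> dict:
    # the dishonest identity falls together with its own sponsors
    removed = {bad} | set(parents.get(bad, ()))
    changed = True
    while changed:  # cascade until no further identities fall
        changed = False
        for child, sponsors in parents.items():
            # assumption: losing any sponsor invalidates the child
            if child not in removed and sponsors & removed:
                removed.add(child)
                changed = True
    return {c: s for c, s in parents.items() if c not in removed}
```

The cost of dishonesty is thus borne by the whole branch, including the sponsors who vouched for it, which is what gives honest sponsors an incentive to be careful.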
>
>>> Carrot & stick works well with nation states. But for reasons, the
>>> nation states have trouble working with public keys. What does work is
>>> communities that have some inner strength. For an example of one that
>>> worked, look at CAcert, which these days is a shadow of its former
>>> self, but it did crack the problem of honesty versus lying, at
>>> Internet scale.
>>>
>> Yeah, I remember CAcert issuing free certificates in some conference
>> lobby ages ago. This is similar, but decentralized.
>
>
> Yes. So, a community needs both a purpose, and an infrastructure. The 
> purpose in this case was free certs. The infra that was built was a 
> group of "Assurers" that checked your passport/ID and assigned points. 
> Behind that group of 5000 say, there were other elements:  testing, 
> training, audit, dispute resolution, governance, policy. All those 
> elements were built, and they worked. The dispute resolution was the 
> stick, and it was scary and effective.
>
> Sadly, the free certs thing never worked for that community bc they 
> got blocked out of the browsers. Big long painful story. But the TL;DR 
> for today is that the community needs a primary purpose which isn't 
> the infra. And that purpose needs to be strong enough to build / 
> support / pay for the infra. That's quite tricky.
>
>> I always imagined the DIDs could live in the IOTA Tangle, but I'm less
>> and less sure about that.
>>
>> https://en.wikipedia.org/wiki/IOTA_(technology)
>>
>> Marek
>
>
> Huh. I'm sure I've come across IOTA somewhere before. It is 
> interesting to read the history of these chains, but there sure is a 
> lot of them, more than days in a year it seems.
>
In IOTA, a new incoming transaction has to validate two previous 
transactions. That is THE trick - the Tangle. I'm suggesting the Tingle: 
a new incoming identity has to be validated by at least two already 
valid ones.
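The structural analogy can be shown in a few lines. This is a rough sketch of the shape of the idea, not IOTA's actual tip-selection algorithm; the `grow` function and the random tip choice are my assumptions.

```python
# Rough analogy to the Tangle: every new entry approves two earlier
# entries, so the DAG's validity accumulates as it grows. `dag` maps
# each entry to the set of entries it approves (genesis: empty set).
import random

def grow(dag: dict, new_id: str, rng: random.Random) -> None:
    """Attach new_id to two tips (entries nothing approves yet)."""
    approved = {p for approvals in dag.values() for p in approvals}
    tips = [n for n in dag if n not in approved]
    # with fewer than two tips, fall back to any two distinct entries
    pool = tips if len(tips) >= 2 else list(dag)
    dag[new_id] = set(rng.sample(pool, 2))
```

In the Tingle variant, "approving" an earlier entry would mean an existing valid identity vouching for the newcomer, rather than a transaction confirming two predecessors.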

Marek
>
>
> iang
>

