[Cryptography] NSA and Tor (was: Updates on Durov charges in France)
efc at disroot.org
Mon Sep 9 16:41:08 EDT 2024
On Mon, 9 Sep 2024, Phillip Hallam-Baker wrote:
>
>
> On Sat, Sep 7, 2024 at 9:34 PM efc--- via cryptography <cryptography at metzdowd.com> wrote:
>
>
> On Sat, 7 Sep 2024, Christian Huitema wrote:
>
> > relying on mega-scalers has its own problems: it contributes to more
> > concentration on the Internet, and even if we believe that these big
> > mixers are not somehow doing surveillance capitalism, they become an
> > attractive point for legal attacks. So maybe as a general practice we
> > ought to rely on a large number of medium size relays, instead of just a
> > few big ones.
>
> One question that I think is neglected when it comes to public encrypted
> services is project and legal governance.
>
> Companies can easily be shut down, or opened up, by law enforcement.
> Individual programmers can be threatened, and open source projects can be
> infiltrated.
>
> Once you're on the inside, it is way easier to attack a project.
>
> How would you protect against those types of attacks? Is there anything
> organizational or legal one can do to reduce the possibility of those
> things?
>
>
> We have seen a number of infiltration efforts and they all have the whiff of being the work of a nation-state actor.
>
> The bottom line is that only a nation-state actor is likely to have the patience required for the insanely long kill chains involved.
> Think eight steps just to compromise SSH, which is merely a stepping stone to compromising other things.
>
> There are two cases of note:
>
> 1) Infiltration
> 2) Compromise of a contributor's machine.
>
> Infiltration is an attack which is probably only feasible for nation-state actors. Not least because anyone who infiltrates an open
> source project is likely to face some individuals with a very personal interest in taking them down hard. It is also pretty difficult
> to maintain. So I'm not saying it doesn't happen, just that compromising a machine is easier.
I wouldn't be so sure. Corporations can be remarkably long-lived, and some
companies are older than many of today's countries. Imagine, for instance,
the cyber security arm of some no-name corporation with nations and
various secret services as clients.
Countries have legal checks and balances, so certain jobs are better
outsourced to such companies. For them, having a couple of sleeper agents
in key strategic open source projects could be a nice thing to have.
I remember, when I was working with RSA, being told that they have people
who monitor "blackhat" forums on the darknet, complete with reputations,
hacks and whatever, in order to pick up early warning signs or catch
hackers bragging about their exploits.
Another data point is a neighbour of mine who used to work for the secret
military arm (I don't remember the name). While he was working he was
grumpy and didn't say much. Towards the end of his life, when a bit drunk,
he would sometimes let interesting details slip, such as how they set up
shell companies to shield themselves from any action, and how they paid
agents without leaving traces back to the government.
> For the second case, what I would like open source projects to start doing is signing every commit with a key tied to the specific
> development machine. So if Fred's old PC is compromised and some malicious updates are added, we can instantly identify the commits
> from that specific device.
>
> To make that happen, it would be helpful for GitHub to make the ability to use SSH signing keys a free feature rather than
> restricting it to the enterprise tier.
I think the fact that GitHub is centralized and web-based already puts it
on the wrong footing when it comes to security. It's like starting with
something bad and then trying to patch it: since you start out with a bad
structure, you will never be able to fix it 100%.
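That said, per-machine signing does not actually depend on GitHub at all.
Plain git (2.34 or later) can sign commits with SSH keys, and each
development machine can simply use its own key. A rough sketch, where the
key names, paths and addresses are only placeholders:

    # On each development machine, generate a key that never leaves it
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_devbox1 -C "fred@devbox1"

    # Tell git on that machine to sign every commit with that key
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519_devbox1.pub
    git config --global commit.gpgsign true

    # Reviewers collect each (contributor, machine) public key in an
    # allowed-signers file and verify signatures against it
    echo "fred@example.org $(cat ~/.ssh/id_ed25519_devbox1.pub)" \
        >> ~/.config/git/allowed_signers
    git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
    git verify-commit HEAD

If Fred's old PC later turns out to be compromised, that one line is
removed from the allowed-signers file and every commit signed on that
particular machine can be flagged and audited, regardless of which forge
hosts the repository.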