WARNING WARNING WARNING WARNING

This is a long-ass thread, even by my long-ass standards. If you'd prefer to read it on the web, here you go:

pluralistic.net/2020/12/05/tru

WARNING WARNING WARNING WARNING

Security researchers are alarmed: the already-notorious Trickbot malware has been spotted probing infected computers to find out which version of UEFI they're running. This is read as evidence that Trickbot has figured out how to pull off a really scary feat.

1/

To understand why, you have to understand UEFI: a fascinating, deep, philosophical change to our view of computers, trust, and the knowability of the universe. It's a tale of hard choices, paternalism, and the race to secure the digital realm as it merges with the physical.

Computers were once standalone: a central processing unit that might be augmented by co-processors for specialized tasks, like a graphics card or even a math co-processor.

2/

These co-pros were subordinate to the CPU, though. You'd turn on the computer and it would read a very small set of hardcoded instructions telling it how to access a floppy disk or other storage medium where the rest of the boot sequence lived - the stuff needed to bring up the system.

The hardwired instructions were in a ROM that had one job: wake up and feed some instructions to the "computer" telling it what to do, then go back to sleep. But there's a philosophical conundrum here.

3/

Because the world of computing is adversarial and networked computing is doubly so: there are people who want your computer to do things that are antithetical to your interests, like steal data or spy on you or encrypt all your files and demand ransom.

To stop this, you need to be able to examine the programs running on your computer and terminate the malicious ones. And therein lies the rub: when you instruct your computer to examine its own workings, how do you know if you can trust it?

4/

In 1983, Ken Thompson (co-creator of Unix and of B, the language that led to C) was awarded the Turing Award ("computer science's Nobel Prize"). He gave a fucking bombshell of an acceptance speech, called "Reflections on Trusting Trust."

cs.cmu.edu/~rdriley/487/papers

Thompson revealed that he had created a backdoor for himself that didn't just live in Unix, but in the C compiler that people used to build new Unix systems.

5/

Here's what that means: when you write a program, you produce "high-level code" with instructions like "printf("Hello, World!");". Once your program is done, you turn it into machine code, a series of much shorter instructions that your CPU understands ("mov dx, msg" etc).
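To make that concrete, here's the canonical example in C:

```c
/* hello.c - the "high-level code" version */
#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");
    return 0;
}
```

Run "gcc -S hello.c" and you get a screenful of assembly - stack bookkeeping, a call instruction, a label pointing at the string - for even this one-liner. "gcc hello.c" goes all the way down to runnable machine code.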

6/

Most programmers can't read this machine code, and even for those who can, it's a hard slog. In general, we write our code, compile it and run it, but we don't examine it. With nontrivial programs, looking at the machine code is very, very hard.

7/

Compilers are treated as intrinsically trustworthy. Give 'em some source, they spit out a binary, you run the binary. Sometimes there are compiler bugs, sure, and compiler improvements can be a big deal. But compilers are infrastructure: inscrutable and forgotten.

Here's what Thompson did: he hid a program in his compiler that would check to see whether you were compiling an operating system or a compiler. If you were compiling an OS, it hid a secret login for him inside of it.

8/

If you were compiling a compiler, it hid the program that looked for compilers or operating systems inside of it.

Think about what this means: every OS you compiled had an intentional security defect that the OS itself couldn't detect.

If you suspected that your compiler was up to no good and wrote your own compiler, it would be compromised as soon as you compiled it. What Thompson did was ask us to contemplate what we meant when we "trusted" something.
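Here's a toy sketch of the trick in C. Every name here is invented - Thompson's real patch lived in the Unix C compiler and targeted login - but the two pattern-matches are the heart of it:

```c
#include <stdio.h>
#include <string.h>

/* Toy "compiler": it doesn't really translate anything, it just
 * demonstrates the two checks at the core of the attack. */
static void compile(const char *source) {
    if (strstr(source, "login")) {
        /* Compiling the OS's login code: quietly splice in a
         * backdoor that accepts a secret password. */
        puts("  [injected] if (!strcmp(pw, \"ken\")) grant_access();");
    }
    if (strstr(source, "compile")) {
        /* Compiling a compiler: splice in a copy of these very
         * checks, so the backdoor survives a rebuild from clean,
         * fully-audited source code. */
        puts("  [injected] <a copy of these two checks>");
    }
    printf("  [normal translation of: %s]\n", source);
}

int main(void) {
    puts("compiling login.c:");
    compile("int login(char *user, char *pw) { ... }");
    puts("compiling cc.c:");
    compile("void compile(char *source) { ... }");
    return 0;
}
```

The second check is the devilish part: even if you recompile the compiler from pristine source, the infected binary doing the compiling re-infects the result.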

9/

It was a move straight out of René Descartes: the reasoning that leads up to "I think, therefore I am." Descartes' "Discourse on the Method" asks how we can know things about the universe.

He points out that sometimes he thinks he senses something but is wrong - he dreams, he hallucinates, he misapprehends.

If all our reasoning depends on the impressions we get from our senses, and if our senses are sometimes faulty, how can we reason at all?

10/

Descartes wants a point of certainty - one thing he KNOWS to be true. He makes the case that if you can be certain of one thing, you can anchor everything else to this point and build up a massive edifice of trustable knowledge that all hangs off of this anchor.

Thompson is basically saying, "You thought you had descartesed your way into a trustable computing universe because of the axiom that I would never poison your lowest-level, most fundamental tools.

"*Wrong*.

"Bwahahahaha."

11/

(But, you know, in a nice way: an object lesson and a wake-up call before computers fully merged with the physical world to form a global, species-wide digital nervous system whose untrustworthy low-level parts were foolishly, implicitly trusted).

But processors were expensive and computers were exploding in popularity. PCs running consumer operating systems like Windows and Mac OS (and more exotic ones like GNU/Linux and various Unices) proliferated, and they all shared this flawed security model.

12/

They all relied on the operating system to be a faithful reporter of the computer's internals, and operated on the assumption that they could use programs supervised by the OS to detect and terminate malicious programs.

But starting in 1999, Ken Thompson's revenge was visited upon the computing world. Greg Hoglund released NTRootkit, a proof-of-concept malware that attacked Windows itself, so that the operating system would lie to antivirus programs about what it was doing and seeing.

13/

In Descartes-speak, your computer could no longer trust its senses, so it could no longer reason. The nub of trust, the piton driven into the mountain face, was made insecure and the whole thing collapsed. Security researchers at big companies like Microsoft took this to heart.

In 2002, Peter Biddle and his team from Microsoft came to EFF to show us a new model for computing: "Trusted Computing" (codenamed "Palladium").

web.archive.org/web/2002080521

14/

Palladium proposed to give computers back their nub of Descartesian certainty. It would use a co-processor, but unlike a graphics card or a math co-pro, it would run before the CPU woke up and did its thing.

And unlike a ROM, it wouldn't just load up the boot sequence and go back to sleep.

This chip - today called a "Secure Enclave" or a "Trusted Platform Module" (etc) - would have real computing power, and it would remain available to the CPU at all times.

15/

Inside the chip was a bunch of cool cryptographic stuff that provided the nub of certainty. At the start of the boot, the TPM would pull the first stages of the boot-code off of the drive, along with a cryptographic signature.

A quick crypto aside:

Crypto is code that mixes a key (a secret known to the user) with text to produce a scrambled text (a "ciphertext") that can only be descrambled by the key.

16/

Dual-key crypto has two keys. What one scrambles, the other descrambles (and vice-versa).

With dual-key crypto, you keep one key secret (the "private key") and you publish the other one (the "public key"). If you scramble something with a private key, then anyone can descramble it with your public key and know it came from you.

17/

If you scramble it *twice*, first with your private key and then with your friend's public key, then they can tell it came from you (because only your private key's ciphertexts can be descrambled with your public key).

And *you* can be certain that only they can read it (because only their private key can descramble messages that were scrambled with their public key).

Code-signing uses dual-key crypto to validate who published some code.
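You can watch that scramble/descramble symmetry work with textbook RSA and comically small primes (p=61, q=53) - hopelessly insecure, purely illustrative:

```c
#include <stdio.h>
#include <stdint.h>

/* n = 61 * 53 = 3233; public exponent e = 17; private d = 2753. */
static uint64_t powmod(uint64_t b, uint64_t x, uint64_t m) {
    uint64_t r = 1;
    for (b %= m; x; x >>= 1) {
        if (x & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

int main(void) {
    const uint64_t n = 3233, e = 17, d = 2753;
    uint64_t msg = 65;
    uint64_t sig = powmod(msg, d, n); /* scramble with the PRIVATE key */
    uint64_t out = powmod(sig, e, n); /* descramble with the PUBLIC key */
    printf("msg=%llu sig=%llu recovered=%llu\n",
           (unsigned long long)msg,
           (unsigned long long)sig,
           (unsigned long long)out);  /* recovered == msg */
    return 0;
}
```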

18/

Microsoft can make a shorter version of its code (a "fingerprint") and scramble it with its private key. The OS that came with your computer has a copy of MSFT's public key. When you get an OS update, you can descramble the fingerprint with that built-in key.

If it matches your own fingerprint of the update, then you know that Microsoft signed it and it hasn't been tampered with on its way to you. If you trust Microsoft, you can run the update.
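Sketched in code, with the same toy key as above and a toy checksum standing in for a real fingerprint function like SHA-256:

```c
#include <stdio.h>
#include <stdint.h>

static uint64_t powmod(uint64_t b, uint64_t x, uint64_t m) {
    uint64_t r = 1;
    for (b %= m; x; x >>= 1) {
        if (x & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

/* Stand-in for a real cryptographic hash. */
static uint64_t toy_hash(const char *s) {
    uint64_t h = 7;
    while (*s) h = (h * 31 + (unsigned char)*s++) % 3233;
    return h;
}

int main(void) {
    const uint64_t n = 3233, e = 17; /* public key, baked into your OS */
    const uint64_t d = 2753;         /* private key, held by the vendor */
    const char *update = "os update v2.0 bytes";

    /* Vendor side: fingerprint the update, scramble with private key. */
    uint64_t sig = powmod(toy_hash(update), d, n);

    /* Your side: descramble with the built-in public key, compare
     * against your own fingerprint of what actually arrived. */
    printf("genuine update verifies: %s\n",
           powmod(sig, e, n) == toy_hash(update) ? "yes" : "no");
    printf("tampered update verifies: %s\n",
           powmod(sig, e, n) == toy_hash("evil bytes") ? "yes" : "no");
    return 0;
}
```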

19/

But... what if a virus replaces Microsoft's public keys with its own?

That's where Palladium's TPM comes in. It's got the keys hardcoded into it. Programs running on the CPU can only ask the TPM to do very limited things, like sign some text or check the signature on some text.
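The chip's whole CPU-facing surface looks something like this (a hypothetical sketch - real TPM command sets are bigger, but just as closed):

```c
#include <stdint.h>
#include <stddef.h>

/* What matters is what's ABSENT: there's no "read out the private
 * key" call and no "replace the built-in public keys" call. */
typedef struct {
    /* Sign a fingerprint with a key that never leaves the chip. */
    int (*sign)(const uint8_t *digest, size_t len, uint8_t *sig_out);
    /* Check a signature against the chip's baked-in public keys. */
    int (*verify)(const uint8_t *digest, size_t len, const uint8_t *sig);
} tpm_api;
```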

20/

It's a kind of god-chip, running below the most privileged level of user-accessible operations. By design, you - the owner of the computer - can demand things of it that it is technically capable of doing, and it can refuse you, and you can't override it.

That way, programs running even in the most privileged mode can't compromise it.

21/

Back to our boot sequence: the TPM fetches some startup code from the disk along with a signature, and checks to see whether the OS has been signed by its manufacturer.

If not, it halts and shows you a scary error message. Game over, Ken Thompson!
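The boot-time logic amounts to something like this sketch (the helper names are invented stand-ins for firmware routines and the chip's hardware signature check):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for firmware routines. */
static const char *read_boot_block(void) { return "bootloader bytes"; }
static const char *read_signature(void)  { return "good-signature"; }

/* Stand-in for the chip's check against its baked-in vendor key;
 * a real TPM does the cryptographic verification in hardware. */
static int tpm_verify(const char *code, const char *sig) {
    (void)code;
    return strcmp(sig, "good-signature") == 0;
}

int main(void) {
    const char *boot = read_boot_block();
    const char *sig  = read_signature();
    if (!tpm_verify(boot, sig)) {
        fputs("Secure Boot violation: invalid signature. Halting.\n", stderr);
        exit(1); /* game over - no unsigned OS gets to run */
    }
    puts("signature OK: jumping to the bootloader");
    return 0;
}
```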

It is a very cool idea, but it's also very scary, because the chip doesn't take orders from Descartes' omnibenevolent God.

22/

It takes orders from Microsoft, a rapacious monopolist with a history of complicity in human rights abuses. Right from that very first meeting, the brilliant EFF technologist Seth Schoen spotted this (and made the Descartes comparison):

web.archive.org/web/2002100412

Seth identified a way of having your cake and eating it too: he proposed a hypothetical thing called an "owner override" - a physical switch that, when depressed, could be used to change which public keys lived in the chip.

23/

This would allow owners of computers to decide who they trusted and would defend them against malware. But what it *wouldn't* do is defend tech companies' shareholders against the owner of the computer - it wouldn't facilitate DRM.

"Owner override" is a litmus test: are you Descartes' God, or Thompson's Satan?

Do you want computers to allow their owners to know the truth? Or do you want computers to bluepill their owners, lock them in a matrix where you get to decide what is true?

24/

A month later, I published a multi-award-winning sf story called "0wnz0red" in Salon that tried to dramatize the stakes here.

salon.com/2002/08/28/0wnz0red/

Despite Seth's technical clarity and my attempts at dramatization, owner override did not get incorporated into trusted computing architectures.

25/

Trusted computing took years to become commonplace in PCs. In the interim, rootkits proliferated. Three years after the Palladium paper, Sony-BMG deliberately turned 6m audio CDs into rootkit vectors that would silently alter your OS when you played them from a CD drive.

26/

The Sony rootkit broke your OS so that any filename starting with $SYS$ didn't show up in file listings and $SYS$ programs didn't show up in the process monitor. Accompanying the rootkit was a startup program (its name starting with $SYS$) that broke CD ripping.
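The hiding trick itself is dead simple once you've hooked the OS's directory-listing code. A toy version (the real rootkit patched Windows kernel calls, so *every* program - including your antivirus - got the filtered view):

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Pretend this is the result of reading a directory; the $SYS$
 * filename is an illustrative stand-in. */
static const char *entries[] = { "report.doc", "$SYS$evil.sys", "song.mp3" };

int main(void) {
    for (size_t i = 0; i < sizeof entries / sizeof entries[0]; i++) {
        if (strncmp(entries[i], "$SYS$", 5) == 0)
            continue; /* silently drop the rootkit's own files */
        puts(entries[i]);
    }
    return 0;
}
```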

27/
