I guess I'm an old fart because I just assumed all this security type stuff was mostly done at the software level. But my machine language knowledge is much more at the Z80/68k level, or old 386. Although I'd probably still prefer the CPU just execute what it's told without trying to check on the code, because otherwise it's basically just a TPM anyway.
I'm sure it's just general incompetence that's prevalent in the computer industry. I've never researched it, but Apple would be stupid to try to redesign an ARM chip from the ground up. It's probably more of an arrangement like the Xenon PPC chip in the Xbox 360 where Microsoft wanted to own the design so they could alter/scale it for cost in the future.
Apple designed the M1 chip with over $1 billion of alleged R&D over 11 years, but the main feature is speed, and the second feature is special hardware instructions to emulate an Intel x86 3 times faster than pure software emulation, mainly endian byte-order swaps and such.
Apple wrote fancy LIBRARIES to help emulation engineers port Linux and video games:
https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
https://news.ycombinator.com/item?id=23613995
The M1 in some configurations, at around $10,000, is of course the fastest computer you can buy for scientific computation. [Apple Mac Studio with M1 Ultra]
https://appleinsider.com/inside/mac-studio/vs/compared-mac-studio-with-m1-max-versus-mac-studio-with-m1-ultra

New CPU benchmarks show that Apple's powerful M1 Ultra chip outperforms Intel's 12th-generation Alder Lake Core i9-12900K in multi-core performance, and AMD's Ryzen 5950X in both multi-core and single-core performance.
M1 Ultra benchmarks versus Intel 12900K, AMD 5950x and Xeon W: https://www.ithinkdiff.com/m1-ultra-benchmarks-intel-12900k-amd-5950x-xeon/
The paper states that the Pointer Authentication module was designed by ARM and released in 2017 as part of the ARMv8 instruction set. Usually what happens with ARM chips is a manufacturer (Apple in this case, though there are a bunch of others) will license a particular ARM design and package it with various other peripherals (e.g. display controllers, USB, SATA, etc.) and memory to produce a single System on Chip (the M1 chip in this case). The manufacturer owns the SoC design, but part of that design is the CPU portion they licensed from ARM.
It's possible that ARM developed this sort of functionality at the behest of Apple (I've heard rumors that Intel has developed certain x86 functions at the behest of Amazon), but this looks like it's an ARM flaw rather than an Apple one.
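If it helps to have a mental model of what that pointer authentication hardware is doing: roughly, the CPU computes a keyed MAC over the pointer plus a 64-bit context value, tucks the truncated result into the otherwise-unused top bits of the 64-bit pointer, and checks it again before the pointer is used. Here's a toy sketch of that idea in C (all names and the mixing function are made up for illustration; the real ARMv8.3 feature uses the QARMA cipher, keys held in system registers, and dedicated PAC*/AUT* instructions in hardware, not library code):

```c
/* Toy model of pointer authentication, NOT the real M1 mechanism. */
#include <stdint.h>
#include <stdio.h>

#define PAC_SHIFT 48                      /* assume a 48-bit virtual address space */
#define PAC_MASK  (~((1ULL << PAC_SHIFT) - 1))

static uint64_t toy_mac(uint64_t ptr, uint64_t context, uint64_t key) {
    uint64_t x = ptr ^ context ^ key;     /* placeholder mixing, stands in for QARMA */
    x *= 0x9E3779B97F4A7C15ULL;
    x ^= x >> 29;
    return x & PAC_MASK;                  /* keep only the bits we can hide in the pointer */
}

static uint64_t sign(uint64_t ptr, uint64_t context, uint64_t key) {
    return (ptr & ~PAC_MASK) | toy_mac(ptr, context, key);
}

/* Returns the stripped pointer on success; a poisoned, non-canonical value on
 * failure so the first dereference faults (roughly what the AUT* instructions do). */
static uint64_t authenticate(uint64_t signed_ptr, uint64_t context, uint64_t key) {
    uint64_t ptr = signed_ptr & ~PAC_MASK;
    if ((signed_ptr & PAC_MASK) != toy_mac(ptr, context, key))
        return ptr | (1ULL << 63);
    return ptr;
}

int main(void) {
    uint64_t key = 0xFEEDFACECAFEBEEFULL, ctx = 0x42;
    uint64_t p = 0x00007FFF12345678ULL;   /* pretend this is a return address */
    uint64_t signed_p = sign(p, ctx, key);
    printf("signed   : %016llx\n", (unsigned long long)signed_p);
    printf("auth ok  : %016llx\n", (unsigned long long)authenticate(signed_p, ctx, key));
    printf("tampered : %016llx\n", (unsigned long long)authenticate(signed_p ^ 0x10, ctx, key));
    return 0;
}
```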
What's the fucking point? It's like having a racecar which is faster than any other, but only if it's on ice.

Is there a lot of software that is only available for 32-bit ARM? Seems more like a racecar that can't drive on ice, which is to say, almost every racecar.
The issue would be old software compiled for 32-bit x86 on Mac. They haven't supported such software for a while, though. 32-bit ARM code is a thing, but I don't think anyone ever used it on a Mac, so there's not a huge case for trying to run it. You probably can emulate it, though.

Countless legacy software, games, etc.?
Since you seem to know a lot more about this stuff than I do: does this exploit affect the new M2 chip they just announced? And of course I've seen a lot of what seemed to be circle-jerking on HN about how fast the chip is, but is that in general-purpose usage or in specific scientific-computation scenarios? Not a Mac guy at all and was just curious.
the second feature is special hardware instructions to emulate an Intel x86 3 times faster than pure software emulation, mainly endian byte-order swaps and such.
Aren't they both little endian? ARM used to support big endian and even mixed endian; however, I think they dropped all big-endian support.
Still, I would think those instructions are generally useful for dealing with peripherals.
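For what it's worth, x86-64 and Apple's ARM cores both run little-endian, so the dedicated swap instructions (REV and friends on ARM, BSWAP/MOVBE on x86) mostly earn their keep for network byte order, big-endian file formats, and peripherals. A minimal sketch of the operation they implement, with a portable version next to the compiler builtin (function name is mine):

```c
/* Byte-order swap: the operation REV (ARM) and BSWAP (x86) do in one instruction. */
#include <stdint.h>
#include <stdio.h>

static uint32_t bswap32_portable(uint32_t x) {
    return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
           ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main(void) {
    uint32_t v = 0x11223344u;
    /* GCC/Clang recognise the shift pattern above (or you can call
     * __builtin_bswap32 directly) and emit a single REV or BSWAP. */
    printf("%08x -> %08x\n", v, bswap32_portable(v));
    printf("%08x -> %08x\n", v, __builtin_bswap32(v));
    return 0;
}
```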
Also an old fart. I kind of wish I had kept up with the hardware side of things, because when I read stuff like this, my mind is blown:
For decades now, computers have been speeding up processing using what’s called speculative execution. In a typical program, which instruction should follow the next often depends on the outcome of the previous instruction (think if/then). Rather than wait around for the answer, modern CPUs will speculate—make an educated guess—about what comes next and start executing instructions along those lines. If the CPU guessed right, this speculative execution has saved a bunch of clock cycles. If it turns out to have guessed wrong, all the work is thrown out, and the processor begins along the correct sequence of instructions. Importantly, the mistakenly computed values are never visible to the software.
The little fuckers basically time travel these days.

That's really interesting. I need to read up on modern CPU architecture sometime just for my own interest.
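You can actually feel that guessing from plain C. The loop below does identical work either way, but with sorted data the branch predictor is almost always right, so the speculative work is kept instead of thrown away; with random data it guesses wrong about half the time and the pipeline pays for it. (Rough sketch assuming a POSIX clock_gettime; exact numbers vary by CPU, and an aggressive compiler may turn the branch into a conditional move or vectorize it, which hides the effect.)

```c
/* Classic branch-prediction demo: same arithmetic, sorted vs. unsorted data. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

static double sum_big(const unsigned char *data, long *sum_out) {
    struct timespec t0, t1;
    long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)          /* hard to predict when data is random */
                sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    *sum_out = sum;                      /* keep the compiler from dropping the loop */
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    unsigned char *data = malloc(N);
    long sum;
    double t;
    if (!data) return 1;
    for (int i = 0; i < N; i++) data[i] = rand() & 0xff;

    t = sum_big(data, &sum);             /* many mispredictions */
    printf("unsorted: %.3fs (sum=%ld)\n", t, sum);
    qsort(data, N, 1, cmp);
    t = sum_big(data, &sum);             /* predictor nearly perfect */
    printf("sorted:   %.3fs (sum=%ld)\n", t, sum);
    free(data);
    return 0;
}
```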
Speculative computation has been in use for at least a couple of decades at this point. The AI being used to drive the prediction logic today is scary. I can foresee a time when someone turns on a brand new computer, is denied access, and is immediately arrested for future criminal activity based on what the webcam picks up in the surroundings.
I guess I'm an old fart because I just assumed all this security type stuff was mostly done at the software level.
No, because you really don't want the software to have that kind of access. You could prohibit speculative execution outright, and eat the resulting cost in speed as you have to perform the calculations in strict start-to-finish order, but you can't trust the software to perform these calculations ahead of time and pinky promise to discard that work properly if it's not needed.
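In practice the compromise is usually narrower than banning speculation outright: you only neutralize the specific loads that could leak. A hedged sketch of the index-masking idea (names are mine; the Linux kernel's array_index_nospec does the same thing but with hand-written assembly for the mask, precisely because a compiler is free to turn the comparison back into a predictable branch):

```c
/* Branch-free index clamp: even if the CPU mispredicts the bounds check,
 * the speculative load can never go out of bounds, because the index
 * itself is forced to 0 for any idx >= len. Assumes len <= SIZE_MAX/2. */
#include <stddef.h>
#include <stdint.h>

static size_t clamp_index_nospec(size_t idx, size_t len) {
    size_t in_bounds = (size_t)(idx < len);   /* 1 or 0 */
    size_t mask = (size_t)0 - in_bounds;      /* all-ones or all-zeros */
    return idx & mask;
}

uint8_t load_checked(const uint8_t *buf, size_t len, size_t idx) {
    if (idx >= len)
        return 0;
    return buf[clamp_index_nospec(idx, len)]; /* safe even under misprediction */
}
```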
I guess I'm an old fart because I just assumed all this security type stuff was mostly done at the software level.
Oh hell no.
Most of the state-of-the-art attacks nowadays either hinge on using speculative execution to get the processor to cough up something it shouldn't, or on excessive DRAM writes to induce spontaneous bit errors through electromagnetic coupling (Rowhammer-style attacks).
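For the speculative-execution half of that, the canonical shape is the Spectre v1 "bounds check bypass" gadget. This is only a sketch of the victim-side pattern with made-up names; the measurement half (flushing the probe array and timing reloads to see which cache line got touched) is deliberately omitted:

```c
/* Spectre-v1-style gadget shape (illustrative only, not a working exploit):
 * under a mispredicted bounds check the CPU may speculatively read
 * out-of-bounds memory and use that value as an index into `probe`,
 * leaving a cache footprint. Architecturally the result is discarded,
 * but the cache state is not, and that is what an attacker times later. */
#include <stddef.h>
#include <stdint.h>

uint8_t  array[16];
size_t   array_len = 16;
uint8_t  probe[256 * 512];       /* one cache line per possible byte value */
volatile uint8_t sink;

void victim(size_t idx) {
    if (idx < array_len) {               /* branch the CPU may mispredict */
        uint8_t secret = array[idx];     /* speculative OOB read if idx is huge */
        sink = probe[secret * 512];      /* touches a cache line that encodes `secret` */
    }
}
```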