Apple designed the M1 chip with over $1 billion of alleged R&D over 11 years, but the main feature is speed, and the second feature is a set of special hardware instructions that let it emulate an Intel x86 about 3 times faster than pure software emulation, mainly endian byte-order swaps and such.
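For a sense of what a byte-order swap instruction buys you, here's a minimal sketch (my own illustration, not Apple's Rosetta code): GCC/Clang expose a builtin that compiles down to a single REV instruction on AArch64 (BSWAP on x86-64).

    #include <stdint.h>
    #include <stdio.h>

    /* Swap the byte order of a 32-bit value. On AArch64 this compiles to a
       single REV instruction; on x86-64 it becomes BSWAP. */
    static inline uint32_t bswap32(uint32_t x) {
        return __builtin_bswap32(x);  /* GCC/Clang builtin */
    }

    int main(void) {
        printf("0x%08x -> 0x%08x\n", 0x12345678u, bswap32(0x12345678u));
        return 0;
    }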
Apple also wrote fancy libraries to help emulation engineers port Linux and video games:
https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
https://news.ycombinator.com/item?id=23613995
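The Rosetta page linked above also documents a sysctl that lets a process ask whether it is currently running translated; roughly (adapted from that page):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/sysctl.h>

    /* Returns 1 if the current process is running under Rosetta translation,
       0 if it is running natively, and -1 on error. */
    static int process_is_translated(void) {
        int ret = 0;
        size_t size = sizeof(ret);
        if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1) {
            if (errno == ENOENT)
                return 0;  /* sysctl not present: not translated */
            return -1;
        }
        return ret;
    }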
In some configurations costing around $10,000, the M1 is of course the fastest computer you can buy for scientific computation. [Apple Mac Studio with M1 Ultra]
https://appleinsider.com/inside/mac-studio/vs/compared-mac-studio-with-m1-max-versus-mac-studio-with-m1-ultra
New CPU benchmarks show Apple's M1 Ultra outperforming Intel's 12th-generation Alder Lake Core i9-12900K in multi-core performance, and AMD's Ryzen 5950X in both multi-core and single-core performance. M1 Ultra benchmarks versus Intel 12900K, AMD 5950x and Xeon W: https://www.ithinkdiff.com/m1-ultra-benchmarks-intel-12900k-amd-5950x-xeon/
The paper states that the Pointer Authentication module was designed by ARM and released in 2017 as part of the ARMv8 Instruction Set. Usually what happens with ARM chips is a manufacturer (Apple in this case, though there are a bunch of others) will license a particular ARM design and package it with various other peripherals (eg. display controllers, USB, SATA, etc...) and memory to produce a single System On Chip (the M1 chip in this case). The manufacturer owns the SoC design, but part of that design is the CPU portion they licensed from ARM.
It's possible that ARM developed this sort of functionality at the behest of Apple (I've heard rumors that Intel has developed certain x86 functions at the behest of Amazon), but this looks like it's an ARM flaw rather than an Apple one.
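For the curious, the PAC primitives are just a handful of instructions. A rough sketch of signing and then authenticating a pointer (my own illustration, not the paper's attack code; it only assembles for an ARMv8.3-A / arm64e target):

    #include <stdint.h>

    /* Sign a pointer with instruction key A, mixing in 'modifier' (typically
       the stack pointer or an object address). The PAC is stored in the
       otherwise-unused upper bits of the pointer. */
    static inline void *pac_sign_ia(void *ptr, uint64_t modifier) {
        __asm__("pacia %0, %1" : "+r"(ptr) : "r"(modifier));
        return ptr;
    }

    /* Authenticate a signed pointer: a matching PAC restores the canonical
       pointer, a mismatch corrupts it so a later dereference traps. */
    static inline void *pac_auth_ia(void *ptr, uint64_t modifier) {
        __asm__("autia %0, %1" : "+r"(ptr) : "r"(modifier));
        return ptr;
    }

As I understand the paper, the trick is that the authenticate step can be probed speculatively, so an attacker can guess PAC values without ever taking the trap that a wrong guess would normally cause.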
What's the fucking point? It's like having a racecar which is faster than any other, but only if it's on ice.
Is there a lot of software that is only available for 32-bit ARM? Seems more like a racecar that can't drive on ice, which is to say almost every racecar.
The issue would be old software compiled for 32-bit x86 on Mac. They haven't supported such software for a while, though. 32-bit ARM code is a thing, but I don't think anyone ever used it on a Mac, so there's not a huge case for trying to run it. You probably can emulate it, though.
But it's ARM anyway. It can't run x86 of any sort.
Countless legacy software titles, games, etc.?
Since you seem to know a lot more about this stuff than I do: does this exploit affect the new M2 chip they just announced? I've also seen a lot of what seemed to be circlejerking on HN about how fast the chip is, but is that in general-purpose usage or in specific scientific-computation scenarios? Not a Mac guy at all, just curious.
Aren't they both little endian? ARM used to support big endian and even mixed endian, though I think they dropped all big-endian support.
Still, I would think those instructions are generally useful for dealing with peripherals.
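Right, e.g. pulling a big-endian field out of a device register or network buffer on a little-endian host like the M1. A generic sketch, not tied to any particular peripheral:

    #include <stdint.h>
    #include <string.h>

    /* Load a 32-bit big-endian value (the usual wire/register layout) and
       convert it to host order, assuming a little-endian host such as the M1.
       The swap compiles to a single REV on AArch64. */
    static inline uint32_t load_be32(const void *p) {
        uint32_t v;
        memcpy(&v, p, sizeof v);       /* avoids unaligned/aliasing issues */
        return __builtin_bswap32(v);   /* big-endian -> little-endian */
    }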