I guess I'm an old fart because I just assumed all this security type stuff was mostly done at the software level. But my machine language knowledge is much more at the Z80/68k level, or old 386. Although I'd probably still prefer the CPU just execute what it's told without trying to check on the code, because otherwise it's basically just TPM anyway.
I'm sure it's just general incompetence that's prevalent in the computer industry. I've never researched it, but Apple would be stupid to try to redesign an ARM chip from the ground up. It's probably more of an arrangement like the Xenon PPC chip in the Xbox 360 where Microsoft wanted to own the design so they could alter/scale it for cost in the future.
Apple designed the M1 chip with over $1 billion of alleged R&D over 11 years, but the main feature is speed, and the second feature is special hardware support that lets it emulate an Intel x86 about 3 times faster than pure software emulation, mainly by implementing x86's stricter total-store-ordering (TSO) memory model in hardware.
Apple also wrote fancy LIBRARIES to help emulation engineers port Linux and video games:
https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
https://news.ycombinator.com/item?id=23613995
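As a concrete example of what that Apple Rosetta documentation covers: a process can ask the kernel whether it is currently being translated by Rosetta 2 through the sysctl.proc_translated sysctl. The sketch below is a minimal C version of that check; the sysctl name comes from Apple's docs, but the function name and error handling here are my own, so treat it as illustrative rather than the canonical snippet.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/sysctl.h>

    /* Returns 1 if the current process is being translated by Rosetta 2,
       0 if it is running natively, and -1 if the state can't be determined. */
    static int process_is_translated(void) {
        int translated = 0;
        size_t size = sizeof(translated);

        if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1) {
            /* The sysctl doesn't exist on Intel Macs / older systems: treat as native. */
            if (errno == ENOENT)
                return 0;
            return -1;
        }
        return translated;
    }

    int main(void) {
        printf("Running under Rosetta 2 translation: %d\n", process_is_translated());
        return 0;
    }

Compiled as an x86_64 binary and run on an M1 Mac it should print 1; built as a native arm64 binary it should print 0.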
The M1 in some configurations, at around $10,000, is of course among the fastest computers you can buy for scientific computation. [Apple Mac Studio with M1 Ultra]
https://appleinsider.com/inside/mac-studio/vs/compared-mac-studio-with-m1-max-versus-mac-studio-with-m1-ultra
New CPU benchmarks show that Apple's M1 Ultra chip outperforms Intel's 12th-generation Alder Lake Core i9-12900K in multi-core performance, and AMD's Ryzen 5950X in both multi-core and single-core performance. M1 Ultra benchmarks versus Intel 12900K, AMD 5950X and Xeon W: https://www.ithinkdiff.com/m1-ultra-benchmarks-intel-12900k-amd-5950x-xeon/
Since you seem to know a lot more about this stuff than I do: does this exploit affect the new M2 chip they just announced? And of course I've seen a lot of what seemed to be circle-jerking on HN about how fast the chip was, but is that in general-purpose usage or in specific scientific computation scenarios? Not a Mac guy at all, just curious.