What's the proof of that besides some Wall Street jockey claiming it?
First, Apple has a world-class silicon engineering team, no doubt. That at the end of the day is the real and only reason (besides Intel's delays) why Apple Silicon has been received so well; it is not because the chips are based on something esoteric called "Arm".
Arm isn't exactly esoteric when it's in 90% of normies' phones and tablets, and even their Chromebooks.
the Apple M1 is highly overhyped.
Agree.
He then claims that the power difference between Arm and x86 is a 'myth' by citing a tech study that compared a 768p Samsung to an Apple iPad 4. The Samsung 'Smart PC' gained about 45 minutes of screen-on time over the iPad 4, but the iPad was half the price of the Samsung (~$500) and driving more than double the resolution (1536p, and accounting for the different form factor, still roughly three times the pixels of the Samsung computer). It's a shitty, non-techie argument to say a computer that costs a grand, with a third of the resolution of a device at half the cost, proves x86 is equal to Arm. Presumably, with the same screen, the Samsung would have a third of the Apple's battery life; it only keeps up because its panel is low-res.
There are other problems contributing to this, but a $1,000 device comparing favorably to a $500 one is not a disproof of better power efficiency.
He said Intel didn't invest seriously in mobile and eventually stopped because it was unprofitable. But that is exactly the problem. Intel's desktop margins DON'T EXIST in the mobile market, so there will NEVER be an equally power-efficient x86 chip from them there. They won't devote the money to make it, and the price per unit of the average phone will never be what Intel is looking for on a per-processor basis. AFAIK some of their phones were given away close to free and they still had no market share, and from what I remember those chips had horrible power use even in the same phones that normally shipped with Arm CPUs and GPUs.
So he doesn't bust the power myth at all. He trumpets the success of the Core M by linking to an AnandTech test of the silicon in a test build, one that wasn't publicly sold. They were hyping up its size and performance in a build that likely cost far more than a consumer device ever would, if it wasn't also running a faster, slimmed-down OS. THIS review of a Core M-5Y70 tablet concludes that it's a power hog and achieves poor performance compared to other tablets. This 'Arm-competitive' processor only put out half the battery life of a MacBook Air in a consumer-available unit. And it cost as much as three iPads.
His arguments don't pass muster for anyone with even a thin layer of tech knowledge.
"This chip completely swept the best chip Apple had at the time."
Intel's Core M is an astonishing 3x faster than Apple's iPad Air. For comparison, more recently people have called Apple Silicon "revolutionary" because it is a measly 10% or so faster than Intel or AMD's x86 chips.
If that were correct, it wouldn't have been panned when put into a consumer device. The 'concept' unit was likely a cherry-picked bin and maybe even overclocked, since price was no object for Intel building one in-house.
And this makes it seem like the writer has a vested interest in Intel's stock.
this should prove that no chip ever has an inherent advantage.
But reading the actual data, it does not prove that. It shows that pre-release chips in concept units can hit any design goals the developer wants when there is no need to price the unit for a consumer to actually buy.
I have not explained in more technical detail why it is the case that x86 vs. Arm does not matter, nor have I explained how Intel went from being up to 3x faster than Apple, to being a little bit behind.
He didn't explain because he has very poor technical knowledge of the articles he's shared in his blog.
He can describe the basic mechanics of what processors do, but I still don't think he gets the outcomes of the tests he's sharing.
Apple has been more focused on improving the amount of instructions its CPU can execute per clock cycle. In that way, Apple's architecture at 3GHz may be faster than for example an AMD or Intel core at 4GHz. Since power consumption tends to increase quadratically or even cubically with voltage-frequency, this design choice has resulted in Apple's acclaimed performance and power characteristics.
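(To put actual numbers on that quadratic/cubic claim: CMOS dynamic power is roughly P ~ C * V^2 * f, and since voltage generally has to rise with frequency, power ends up scaling closer to f^3. A quick sketch of the math; the linear voltage-to-frequency relationship here is just an illustrative assumption, not something from his article:)

```c
#include <stdio.h>

/* Back-of-the-envelope CMOS dynamic power: P ~ C * V^2 * f.
 * Illustrative assumption: supply voltage scales linearly with frequency,
 * which makes dynamic power scale roughly with f^3. */
static double dynamic_power(double freq_ghz)
{
    const double c = 1.0;          /* arbitrary capacitance constant */
    double v = freq_ghz / 4.0;     /* assume V proportional to f */
    return c * v * v * freq_ghz;
}

int main(void)
{
    /* A 3 GHz core vs a 4 GHz core: (3/4)^3 is about 0.42, i.e. well
     * under half the dynamic power for 75% of the clock speed. */
    printf("P(3GHz)/P(4GHz) = %.2f\n",
           dynamic_power(3.0) / dynamic_power(4.0));
    return 0;
}
```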
Maybe the instructions per clock have helped the battery life, maybe the lower frequencies have, but I don't think it's so much Apple revolutionizing anything as the Arm design itself being based on RISC, whereas most x86 silicon is based on the more 'wordy' Complex Instruction Set Computing, where that extra capability requires more overhead on the silicon. That wasn't an invention of Apple's, but merely the way the Arm format is oriented.
AFAIK Apple is just adapting its own designs within the Arm format.
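To make the 'wordy' decode point concrete (a toy sketch, not real Arm or x86 decoding): with fixed 4-byte instructions, every instruction boundary is known by simple arithmetic, so a wide decoder can grab several at once, while a variable-length encoding has to work out each instruction's length before it knows where the next one starts.

```c
#include <stddef.h>
#include <stdio.h>

/* Toy illustration only; neither function models real Arm or x86 encoding. */

/* Fixed-length ISA: instruction N starts at byte 4*N, so boundaries are
 * trivial to find and many instructions can be decoded in parallel. */
static size_t fixed_start_of(size_t n)
{
    return 4 * n;
}

/* Variable-length ISA: the start of instruction N depends on the lengths
 * of every instruction before it, so boundaries are found sequentially
 * (real hardware throws extra decode logic at this). */
static size_t variable_start_of(const unsigned char *code, size_t n)
{
    size_t off = 0;
    for (size_t i = 0; i < n; i++)
        off += 1 + (code[off] & 0x07);   /* pretend the length lives in the low bits */
    return off;
}

int main(void)
{
    unsigned char code[16] = {2, 0, 0, 0, 1, 0, 5, 0, 0, 0, 0, 0, 3};
    printf("fixed-length:    insn 3 starts at byte %zu\n", fixed_start_of(3));
    printf("variable-length: insn 3 starts at byte %zu\n", variable_start_of(code, 3));
    return 0;
}
```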
the language merely serves as a representation for underlying concepts. In principle, any concept should be able to be expressed in any language.
I think this writer had a minimum word limit ;).
However, completely overlooked is the powerful advantage of actually controlling one's own architecture. This could in fact be seen as a strong advantage since everyone else in the industry is dependent on others to define their architecture: although the reverse has also occurred a few times,
Half word salad.
Within just a few days, Intel recently gave two examples of how easily it can add new instructions to its x86 architecture to innovate. First, Intel announced its next-gen AMX DLBoost for AI acceleration, which will deliver 4-8x higher performance than the current AVX-512. Secondly, Intel announced no less than 76 new instructions to deliver additional capabilities for the current AVX-512.
I think this guy is heavily bought into Intel stock.
He also disses on Nvidia's approach to AI using its repurposed GPUs.
No, he doesn't.
There are genuinely three different kinds of computers: CPUs, GPUs, and AI. NVIDIA is kind of doing the ‘inbetweener’ thing where they're using a GPU to run AI, and they're trying to enhance it. Some of that is obviously working pretty well, and some of it is obviously fairly complicated. What's interesting, and this happens a lot, is that general-purpose CPUs when they saw the vector performance of GPUs, added vector units. Sometimes that was great, because you only had a little bit of vector computing to do, but if you had a lot, a GPU might be a better solution.
That is hardly a diss.
"Finally, he says that CPU performance nowadays is limited with what I characterized above as instruction set-independent issues (features such as branch prediction, caches, etc.)".
But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
So fixed-length instructions seem really nice when you're building little baby computers, but if you're building a really big computer, to predict or to figure out where all the instructions are, it isn't dominating the die. So it doesn't matter that much.
What limits computer performance today is predictability, and the two big ones are instruction/branch predictability, and data locality.
Now the new predictors are really good at that. They're big - two predictors are way bigger than the adder. That's where you get into the CPU versus GPU (or AI engine) debate. The GPU guys will say ‘look there's no branch predictor because we do everything in parallel’. So the chip has way more adders and subtractors, and that's true if that's the problem you have. But they're crap at running C programs.
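The data-locality half of that is easy to see for yourself: the same arithmetic runs far slower when the loop walks memory with a big stride instead of sequentially. A rough sketch (the array size and timings are illustrative and machine dependent, not something from the interview):

```c
#include <stdio.h>
#include <time.h>

/* Same sum, two loop orders: one walks memory sequentially (cache
 * friendly), the other strides N*8 bytes per access (poor locality). */
#define N 2048

static double grid[N][N];

static double sum_row_major(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];        /* consecutive addresses */
    return s;
}

static double sum_col_major(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];        /* large strides, more cache misses */
    return s;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            grid[i][j] = 1.0;

    clock_t t0 = clock();
    double a = sum_row_major();
    clock_t t1 = clock();
    double b = sum_col_major();
    clock_t t2 = clock();

    printf("row-major: %.3fs  col-major: %.3fs  (sums: %.0f, %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
    return 0;
}
```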
How and why Intel stagnated is a sad story (at least for Intel fans and investors, not so much for AMD folks) that falls outside the scope of this article.
Suspicion confirmed, lmao.
People have lauded Apple Silicon almost as if it is the best thing in technology since sliced bread. However, as the image above shows, it isn't. It is simply an evolution of Apple's previous chips: reviews have shown that when comparing one A14 core against one A13 core, at the same clock frequency, the A14 is less than 10% faster.
In devices half the price of the "efficient" x86 chips he's listed. >less than 10% lmao
Neither is it all that much faster than what Intel or AMD has in the market.
.
Apple did not achieve anything extraordinary with the M1
Tru
He links a pic that proves my post from the 'PCGAMER says M1 is a l33t gaming pc' thread, that the M1 was only compared against 'litebooks'. I said a similarly priced gaming PC would blow them out of the water, and here are current-gen Ryzen PCs at about double the Cinebench score of the M1.
In some regard, this may imply that Apple has already fallen behind. Apple will readily need its next-gen M2 (aka A15X) to actually stay competitive.
To conclude, none of Apple's accomplishments are attributable to differences in instruction set, and some are actually attributable to the use of TSMC's 5nm.
These are totally possible.
Additionally, the difference in clock speed (due to differences in target markets) also plays a major role: Intel or AMD are not suddenly going to launch a 3GHz chip into the desktop market. So I would assume both companies will likely continue to make somewhat different trade-offs to achieve their respective targets.
They both currently sell chips above AND under 3GHz right now... I'm not sure he knows anything about the current tech market besides which stock picks he's pushing this week.
while it is a bit more difficult to design a chip with both a very high frequency and high performance per clock, it is not impossible. For example, I have read research papers from Intel where they achieved over 2x higher performance per clock compared to its 14nm Skylake simply by scaling up some of its architectural structures.
This stuff reads like some hedge fund manager who started using Google after buying 500 grand of Intel stock a week ago.
Additionally, besides research, Jim Keller himself has also said that Intel is working on a "significantly bigger" CPU.
Cool, he's totally not a huge Intel trader.
In Apple parlance, Apple supposedly would have a unique advantage since it controls the "full widget": as such it has added other accelerators on its chip. So in that regard (as the argument goes) the M1 is not a CPU [like Intel or AMD as "legacy" companies], but rather it is an SoC.
That's not Apple parlance; SoC is standard CPU talk, and the Intel mobile chip he was talking about earlier was ALSO a single SoC.
However, one just has to look at the block diagram of Intel's latest Tiger Lake SoC (image below) to see that Tiger Lake is just as much a multi-accelerator SoC as the M1. This in fact was one of the things that tech analysts praised about Tiger Lake last year:
This guy is just copy-pasting press releases and other shit. He's a big trader of Intel stock.
The PC for over a decade dreamed of a more heterogeneous computing environment while smartphones were executing on it. Maybe Tiger Lake and 11th Gen Core is the beginning;
Corporate bullshit talk. This writer is a grifter for even including this.
One could try to search for even one thing that is in the M1 that isn't in Tiger Lake. I haven't found anything.
There is a CPU, GPU, display, I/O, image processing, AI acceleration, security, media acceleration, audio, Wi-Fi, power delivery and management controller, etc.
Oh wow, a CPU AND a GPU??? You don't say. There's even AUDIO???? How did they make this space-age technology... it's never existed in another processor before this one.
The only thing, I suppose, that one could argue is still missing is a multi-tera-ops dedicated AI accelerator
No wai