also a footnote to the illiterate guy writing this: SoC is "system on a chip", not "system on chips", roflmao,
and
Intel's x86 silicon chip
it's like the only word he knows for CPU tech is "chip".
a big deal because Apple uses a bunch of proprietary tech that doesn't play nice with you if you're trying to run an operating system that isn't macOS on one of its computers.
Is this a change for the M1? Apple has allowed you to install Windows for at least a decade lmao
When I looked it up, the M1 is an ARM chip. I don't think it's necessarily a bad ARM chip by any means, but if we are talking about anything close to a gaming PC, it's not going to cut it with software support.
Yeah, he didn't write what model of crApple it was, so I just called it the M1, even though that's just the CPU.
The M1 is an ARM processor with integrated graphics; the base configs ship with 8GB of RAM (16GB max), and that RAM is shared with the built-in graphics cores.
Some nerds have been claiming that it is faster than a GTX 1050 and that it rekt tons of other PCs, but none of these sites list the specs of those PCs. There was another chart with battery life: the Asus being compared got closest, within 1 hour of the Mac's power-on time; the Dell was 3 hours less, and the older Mac was 5 hours less.
With that said, Apple doesn't seem to want to share what frequency it's running at, and without the other laptops spec'd out these tests are pointless. The Mac is already a $1300 machine, and that money would get you a WAY more powerful laptop just looking at gaming laptops in that price range; I think they're comparing it to the 'cheap' ultrabook-type laptops.
I just checked, and $1300 is literally a 3.2GHz octo-core with an RTX 3060 6GB, double the Apple's RAM, and a 1TB SSD. The Apple page doesn't even tell you what storage it comes with; it only says "up to" 2TB. At $1300 I'm going to say it's a 256GB. Regardless, there is NO WAY it's faster than a notebook of a similar price. None at all. I'm quite surprised how 'cheap' that new laptop I looked up is; technology has really jumped at that price range over the last two years.
So all in all, they're likely cooking the tests by using low-spec 'travel' PCs, in my opinion.
What's the proof of that besides some Wall Street jockey claiming it?
First, Apple has a world-class silicon engineering team, no doubt. That at the end of the day is the real and only reason (besides Intel's delays) why Apple Silicon has been received so well; it is not because the chips are based on something esoteric called "Arm".
ARM isn't exactly esoteric when it's in 90% of normies' phones and tablets, and even their Chromebooks.
the Apple M1 is highly overhyped.
Agree.
He then claims that the power difference between ARM and x86 is a 'myth' by citing a tech study that compared a 768p Samsung to an Apple iPad 4. While the Samsung 'smart PC' gained about 45 minutes of screen-on time compared to the iPad 4, the iPad was half the price of the Samsung (~$500) and running at more than double the resolution (1536 lines; even accounting for the different form factor, that's still roughly 3 times the pixels of the Samsung machine). It's a shitty non-techie argument to claim that a computer costing a grand, with a third the resolution of a device at half the cost, proves x86 is equal to ARM. Presumably, with the same screen as the Apple, the Samsung would have a third of the Apple's battery life; it only wins because its panel is low-res.
There are other problems contributing to this, but a $1k device comparing favorably to a $500 one is not a disproof of better power efficiency.
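For what it's worth, the pixel math backs up the "roughly 3 times the pixels" line above; here's a quick sanity check (panel resolutions assumed: 1366x768 for the Samsung "smart PC", 2048x1536 for the iPad 4):

```c
#include <stdio.h>

int main(void) {
    /* Assumed panel resolutions, not taken from the article itself. */
    long samsung = 1366L * 768;   /* ~1.05 million pixels */
    long ipad4   = 2048L * 1536;  /* ~3.15 million pixels */

    printf("Samsung: %ld px, iPad 4: %ld px, ratio: %.2fx\n",
           samsung, ipad4, (double)ipad4 / samsung);
    /* Prints a ratio of about 3.0, so the iPad is driving roughly three
       times as many pixels while posting its battery-life number. */
    return 0;
}
```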
He said Intel stopped investing in mobile because it was unprofitable. But that's exactly what the problem is. Intel's desktop margins DON'T EXIST in the mobile field, so there will NEVER be an equally power-efficient x86 from them there. They won't devote the money to make it, and the price per unit for an average phone chip will never be what Intel is looking for on a per-processor basis. AFAIK some of their phones were given away close to free and they still had no market share, and from what I remember they had horrible power use even in the same phone models that normally shipped with ARM CPUs and GPUs.
So he doesn't bust the power myth at all. He trumpets the success of the Core M by linking to AnandTech's test of the silicon in a test build, one that wasn't publicly sold. They were hyping up its size and performance in a build that likely cost way more than a consumer device ever would, if not also running a faster, slimmed-down OS. THIS review of a Core M-5Y70 tablet build concludes that it's a power hog and achieves poor performance compared to other tablets. This 'ARM-competitive' processor only put out half the battery life of a MacBook Air in a consumer-available unit. And it cost as much as 3 iPads.
His arguments don't pass muster for anyone with even a thin layer of tech knowledge.
"This chip completely swept the best chip Apple had at the time."
Intel's Core M is an astonishing 3x faster than Apple's iPad Air. For comparison, more recently people have called Apple Silicon "revolutionary" because it is a measly 10% or so faster than Intel or AMD's x86 chips.
If that were correct, it wouldn't have been panned when put into a consumer device. The 'concept' unit was likely a top-binned part and maybe even overclocked, since price was no object for Intel building one in-house.
And this makes it seem like the writer has a vested interest in Intel's stock.
this should prove that no chip ever has an inherent advantage.
But reading the actual data, it does not prove that. It shows that pre-release chips in concept units can hit whatever design goals the developer wants, when there's no need to price the thing for a consumer to actually buy.
I have not explained in more technical detail why it is the case that x86 vs. Arm does not matter, nor have I explained how Intel went from being up to 3x faster than Apple, to being a little bit behind.
He didn't explain because he has very poor technical understanding of the articles he's shared in his blog.
He can describe the basics of what processors do, but I still don't think he gets the outcomes of the tests he was sharing.
Apple has been more focused on improving the amount of instructions its CPU can execute per clock cycle. In that way, Apple's architecture at 3GHz may be faster than for example an AMD or Intel core at 4GHz. Since power consumption tends to increase quadratically or even cubically with voltage-frequency, this design choice has resulted in Apple's acclaimed performance and power characteristics.
Maybe the instructions per clock have helped the battery life, maybe the lower frequencies have, but I don't think it's so much Apple revolutionizing anything as the ARM design itself being based on RISC, whereas most x86 silicon is based on the more 'wordy' Complex Instruction Set Computing, where the extra capability requires more overhead on the silicon. That wasn't an invention of Apple's, but merely the way the ARM format is oriented.
AFAIK Apple is just adapting its own designs within the ARM format.
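For a rough sense of the trade-off that quoted paragraph is describing, here's a toy sketch. The numbers are made up; only the relationships matter (throughput ≈ IPC × clock, dynamic power ≈ C·V²·f, and voltage tends to climb along with frequency):

```c
#include <stdio.h>

int main(void) {
    /* Two hypothetical cores, purely illustrative numbers. */
    double wide_ipc   = 8.0, wide_ghz   = 3.0, wide_v   = 0.9; /* wide, low-clock core  */
    double narrow_ipc = 5.0, narrow_ghz = 5.0, narrow_v = 1.2; /* narrow, high-clock core */

    /* Throughput scales with instructions-per-clock times clock frequency. */
    double wide_perf   = wide_ipc   * wide_ghz;
    double narrow_perf = narrow_ipc * narrow_ghz;

    /* Dynamic power ~ C * V^2 * f; with the same C, the V^2 * f term dominates,
       and because V rises with f the growth ends up roughly cubic in frequency. */
    double wide_power   = wide_v   * wide_v   * wide_ghz;
    double narrow_power = narrow_v * narrow_v * narrow_ghz;

    printf("wide core:   perf %.0f, relative power %.2f\n", wide_perf, wide_power);
    printf("narrow core: perf %.0f, relative power %.2f\n", narrow_perf, narrow_power);
    /* Nearly identical throughput, but the high-clock design burns roughly
       3x the dynamic power in this toy model. */
    return 0;
}
```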
the language merely serves as a representation for underlying concepts. In principle, any concept should be able to be expressed in any language.
I think this writer had a minimum word limit ;).
However, completely overlooked is the powerful advantage of actually controlling one's own architecture. This could in fact be seen as a strong advantage since everyone else in the industry is dependent on others to define their architecture: although the reverse has also occurred a few times,
Half word salad.
Within just a few days, Intel recently gave two examples of how easily it can add new instructions to its x86 architecture to innovate. First, Intel announced its next-gen AMX DLBoost for AI acceleration, which will deliver 4-8x higher performance than the current AVX-512. Secondly, Intel announced no less than 76 new instructions to deliver additional capabilities for the current AVX-512.
I think this guy is heavily bought into Intel stock.
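For anyone wondering what "adding instructions" actually looks like from the software side, here's a minimal sketch. It uses AVX2 (8 floats per instruction) rather than AVX-512 since that's what most machines can run, and the function names are just mine:

```c
/* Build with something like: gcc -O2 -mavx2 vec_add.c */
#include <immintrin.h>
#include <stdio.h>

/* Plain scalar add: one float per loop iteration. */
static void add_scalar(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Same work using 256-bit AVX registers: eight floats per instruction.
   AVX-512 widens this to sixteen; AMX adds whole tile/matrix operations. */
static void add_avx2(const float *a, const float *b, float *out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++)  /* scalar tail for leftover elements */
        out[i] = a[i] + b[i];
}

int main(void) {
    float a[16], b[16], s[16], v[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_scalar(a, b, s, 16);
    add_avx2(a, b, v, 16);
    printf("scalar s[5]=%.1f, avx2 v[5]=%.1f\n", s[5], v[5]); /* both 15.0 */
    return 0;
}
```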
He also disses on Nvidia's approach to AI using its repurposed GPUs.
No, he doesn't.
There are genuinely three different kinds of computers: CPUs, GPUs, and AI. NVIDIA is kind of doing the ‘inbetweener’ thing where they're using a GPU to run AI, and they're trying to enhance it. Some of that is obviously working pretty well, and some of it is obviously fairly complicated. What's interesting, and this happens a lot, is that general-purpose CPUs when they saw the vector performance of GPUs, added vector units. Sometimes that was great, because you only had a little bit of vector computing to do, but if you had a lot, a GPU might be a better solution.
That is hardly a diss.
"Finally, he says that CPU performance nowadays is limited with what I characterized above as instruction set-independent issues (features such as branch prediction, caches, etc.)".
But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
So fixed-length instructions seem really nice when you're building little baby computers, but if you're building a really big computer, to predict or to figure out where all the instructions are, it isn't dominating the die. So it doesn't matter that much.
What limits computer performance today is predictability, and the two big ones are instruction/branch predictability, and data locality.
Now the new predictors are really good at that. They're big - two predictors are way bigger than the adder. That's where you get into the CPU versus GPU (or AI engine) debate. The GPU guys will say ‘look there's no branch predictor because we do everything in parallel’. So the chip has way more adders and subtractors, and that's true if that's the problem you have. But they're crap at running C programs.
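The "data locality" half of that point is easy to show yourself; here's a rough sketch (exact timings depend on the machine, and the only difference between the two passes is the loop order):

```c
/* Build with something like: gcc -O2 locality.c */
#include <stdio.h>
#include <time.h>

#define N 4096
static double m[N][N]; /* row-major: m[i][j] and m[i][j+1] are adjacent in memory */

int main(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = 1.0;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++)      /* row by row: sequential, cache-friendly */
        for (int j = 0; j < N; j++)
            sum += m[i][j];
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)      /* column by column: strided, cache-hostile */
        for (int i = 0; i < N; i++)
            sum += m[i][j];
    clock_t t2 = clock();

    printf("sum=%.0f  row-order %.3fs  column-order %.3fs\n", sum,
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    /* Same arithmetic on the same data; only the access pattern changes, yet
       the strided pass is typically several times slower. Branch predictability
       is the other big factor Keller mentions; it's harder to show this compactly. */
    return 0;
}
```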
How and why Intel stagnated is a sad story (at least for Intel fans and investors, not so much for AMD folks) that falls outside the scope of this article.
Suspicion confirmed, lmao.
People have lauded Apple Silicon almost as if it is the best thing in technology since sliced bread. However, as the image above shows, it isn't. It is simply an evolution of Apple's previous chips: reviews have shown that when comparing one A14 core against one A13 core, at the same clock frequency, the A14 is less than 10% faster.
In devices half the price of the "efficient" x86 chips he's listed. >less than 10% lmao
Neither is it all that much faster than what Intel or AMD has in the market.
Apple did not achieve anything extraordinary with the M1
Tru
He links a pic that proves my point from the "PC Gamer says the M1 is a l33t gaming PC" thread: the M1 was only compared against 'litebooks'. I said a similarly priced gaming PC would blow it out of the water, and here are current-gen Ryzen PCs at about double the Cinebench score of the M1.
In some regard, this may imply that Apple has already fallen behind. Apple will readily need its next-gen M2 (aka A15X) to actually stay competitive.
To conclude, none of Apple's accomplishments are attributable to differences in instruction set, and some are actually attributable to the use of TSMC's 5nm.
These are totally possible.
Additionally, the difference in clock speed (due to differences in target markets) also plays a major role: Intel or AMD are not suddenly going to launch a 3GHz chip into the desktop market. So I would assume both companies will likely continue to make somewhat different trade-offs to achieve their respective targets.
They both currently sell chips above AND under 3GHz right now... I'm not sure he knows anything about the current tech market beyond which stock picks he's pushing this week.
while it is a bit more difficult to design a chip with both a very high frequency and high performance per clock, it is not impossible. For example, I have read research papers from Intel where they achieved over 2x higher performance per clock compared to its 14nm Skylake simply by scaling up some of its architectural structures.
This stuff reads like some hedge fund manager started using Google after buying 500 grand of Intel's stock a week ago.
Additionally, besides research, Jim Keller himself has also said that Intel is working on a "significantly bigger" CPU.
Cool. He's totally not a huge Intel trader.
In Apple parlance, Apple supposedly would have a unique advantage since it controls the "full widget": as such it has added other accelerators on its chip. So in that regard (as the argument goes) the M1 is not a CPU [like Intel or AMD as "legacy" companies], but rather it is an SoC.
That's not Apple parlance; SoC is standard CPU-industry talk, and Intel's mobile chip he was talking about earlier was ALSO a single SoC.
However, one just has to look at the block diagram of Intel's latest Tiger Lake SoC (image below) to see that Tiger Lake is just as much a multi-accelerator SoC as the M1. This in fact was one of the things that tech analysts praised about Tiger Lake last year:
This guy is just copy-pasting press releases and other shit. He's a big trader in Intel stock.
The PC for over a decade dreamed of a more heterogeneous computing environment while smartphones were executing on it. Maybe Tiger Lake and 11th Gen Core is the beginning;
Corporate bullshit talk. This writer is a grifter for even including this.
One could try to search for even one thing that is in the M1 that isn't in Tiger Lake. I haven't found anything.
There is a CPU, GPU, display, I/O, image processing, AI acceleration, security, media acceleration, audio, Wi-Fi, power delivery and management controller, etc.
Oh wow, a CPU AND a GPU??? You don't say. There's even AUDIO???? How did they make this space-age technology... it has never existed in another processor before this one.
The only thing, I suppose, that one could argue is still missing is a multi-tera-ops dedicated AI accelerator
instead of a neural engine, Intel has opted to leverage its CPU and GPU by integrating what Intel calls DLBoost. As a result, in terms of tera-ops, Intel actually isn't much behind Apple in AI capabilities.
More C-suite word salad.
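For reference, "tera-ops" marketing numbers come from a simple peak-throughput formula; the unit counts and clocks below are placeholder guesses for illustration, not Apple's or Intel's real figures:

```c
#include <stdio.h>

/* Peak TOPS = MAC units * 2 ops per MAC (multiply + add) * clock / 1e12 */
static double peak_tops(double mac_units, double clock_hz) {
    return mac_units * 2.0 * clock_hz / 1e12;
}

int main(void) {
    /* Hypothetical dedicated engine: 4096 MACs at 1 GHz -> ~8 TOPS peak. */
    printf("dedicated AI engine:      %.1f TOPS\n", peak_tops(4096, 1.0e9));
    /* Hypothetical CPU/GPU path: fewer MACs but higher clocks. */
    printf("CPU/GPU DLBoost-style path: %.1f TOPS\n", peak_tops(1024, 3.0e9));
    /* Peak numbers say nothing about real-world utilisation, which is exactly
       why vendor-picked benchmarks deserve a grain of salt. */
    return 0;
}
```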
Intel actually found some real-world AI applications where it completely trounced the M1.
Cool, we get a carefully selected test from the manufacturer, just like the first article he linked that didn't match real-world performance, again.
The point I would emphasize about these benchmarks. Whether they are cherry-picked or not (which they obviously are since they come from Intel)
Funny how he doesn't say this until the fifth-to-last paragraph of his 35-minute-long article.
the M1 hype has been far overblown: the chip may be fast, but it not in a league of its own. Else Intel wouldn't have been able to find benchmarks where it achieves such strong performance against the M1
His own link literally shows non-Intel processors destroying the M1. He is not a techie, only a stock person, and can't even interpret the data HE is sharing in this article. #megafail LMAO
there have been quite a few people who have argued that Apple's Mac sales are booming due to Apple Silicon.
lol. Their desktops have had this ARM chip for less than a year now?
Although Apple is doing a bit better than the strong PC environment overall, any additional demand due to Apple Silicon can't be isolated and simply does not seem to be the main phenomenon.
This guy is stupidly wordy for no good purpose.
In any case, for a company that is fundamentally a consumer devices company, developing its chips in-house can be seen as a real advantage. On the competitive side, I argued that Apple has tailwinds from its partnership with TSMC (as first 5nm customer) and Intel's many delays.
So looking forward, I would predict quite fierce competition to continue or to increase - for example as Intel boasts about returning to process and product leadership in a few years. Still, if one may remember one point about Apple's overhyped M1, is that it achieved far from the 2-3x lead which Intel actually had shortly before its delays begun.
So the first gen isn't 2-3x what Intel has, so it's not long-term viable? I completely disagree; it's bad for different reasons than not being that fast in its first generation.
What a long-ass, meandering post this was. Really.
an engineering background
How do you know someone is an engineer????
and THERE IT IS
I/we have a beneficial long position in the shares of INTC either through stock ownership, options, or other derivatives.
Boom, I was right all along. Shitty article, a waste of time from a low-IQ market coomer.