
Content

Discord is down, so I am soliciting reader-mail questions through Patreon (see why I am on multiple platforms now?). The previous episode is here:

https://youtu.be/thz9L8o9pbo


Reply to this post with well-thought-out questions about security and server-grade computing.

Comments

Anonymous

As Intel releases more and more security patches every year, it becomes increasingly obvious that their branch-prediction and speculative-execution engines are fundamentally flawed. It also seems unlikely that they could build a new speculative-execution engine from the ground up without inadvertently building in future vulnerabilities. With the revelation that even the hardware we use can be a security liability, how do you think we will change the way we approach security going forward?

Anonymous

What differences are there between the branch prediction AMD uses and Intel's, and why are AMD's chips not as vulnerable as Intel's? At what point do the vulnerabilities in Intel's chips reach criticality - that is, when does the cost of switching all systems over to AMD outweigh adding more Intel chips to make up for the performance loss? Will Intel reach a point where the mitigations cause so much performance loss that they are forced to make major architectural changes, and could this reach the point of a class-action lawsuit? Thanks again, Tom and the team, for the great work - I look forward to your videos whenever they come out!

Anonymous

Indeed. My guess is that Intel simply does MORE speculation, and is therefore capable of leaking information faster. Plus, because of Intel's massive market share, their CPUs make a much larger target. AMD has been able to slowly close the gap with Intel's technology (in terms of apples-to-apples sequential performance) with foresight Intel didn't have; by the time they have comparable market share and do as much speculation as Intel, they'll have closed those vulnerabilities. Intel, in a way, was too successful for its own good. That success is also what made them overconfident in their fabs' abilities and created the 10nm mess... Intel has big plans, but it was a perfect storm of failures and successes that dug the hole they're in now. I feel their odds are good going forward, as I see they have more and better long-term "big picture" ideas in the pipeline, but it'll be a while before we see them. On the consumer desktop side, we might not see them for even longer. Intel is looking for bigger fish now.

Anonymous

How does the cost of electricity to run a server CPU over its lifetime compare to the cost of the CPU itself? I ask because I wonder whether it will ever become a consideration for companies: lowering their electric bills by using a more energy-efficient setup.
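For a rough sense of scale, the comparison in the question can be sketched with back-of-the-envelope arithmetic. Every number below is an illustrative assumption (wattage, service life, electricity rate, and PUE overhead), not vendor data:

```python
# Rough TCO sketch: lifetime electricity cost of running one server CPU,
# to compare against its purchase price. All inputs are illustrative
# assumptions, not real vendor or datacenter figures.

def lifetime_electricity_cost(avg_watts, years, price_per_kwh, pue=1.5):
    """Energy cost of running one CPU continuously for `years`.
    PUE (power usage effectiveness) folds in cooling/overhead power."""
    hours = years * 365 * 24
    kwh = (avg_watts / 1000.0) * hours * pue
    return kwh * price_per_kwh

# Hypothetical example: a 200 W server CPU, 5-year service life,
# $0.10/kWh, and a datacenter PUE of 1.5.
cost = lifetime_electricity_cost(avg_watts=200, years=5, price_per_kwh=0.10)
print(f"${cost:,.0f}")  # about $1,314 over five years
```

Under these assumptions, five years of electricity lands in the same ballpark as the price of a mid-range server CPU, which is why perf-per-watt does show up in datacenter purchasing decisions.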

Nils

For servers in large compute nodes, do they use normal networking interconnects like 10G or 50G Ethernet or is there another type of interconnect? Are there any more advanced ones in development? What would better interconnects mean for large scale computing?

Anonymous

From what I understand, Intel's top-end "Agilex" server products are heterogeneous: they combine chiplets and 3D stacking to create mix-and-match products with many different technologies integrated into a single tightly coupled platform - high-power CPU cores alongside FPGAs, AI accelerators, GPUs, high-speed I/O, stacked memory, etc. How long until these types of systems become commonplace in the server market? Are we just waiting for more mature development software (like Intel's oneAPI) to make this hardware accessible to "traditional" programmers? Is supply of these products an issue?

I hear about AMD's chiplets, "future" stacking, and mix-and-match hardware, but Intel already does these things, and the underlying technology seems to be better. Why is nobody talking about that? AMD also doesn't seem to have the IP to create a truly powerful heterogeneous system. Intel, on the other hand, has its Altera IP (FPGAs), X-Point, eASIC designs, and (soon) GPUs that I suspect will be more useful in GPGPU and highly multithreaded applications. The kinds of workloads that scale well with more CPU cores will likely scale even better on a truly GPGPU-like architecture.

AMD may have the raw core count and price advantage in the traditional CPU market, but is that enough going forward? Intel seems to be building more future-aware hardware, but it's a risk: if the market decides it's too time-consuming and expensive to use right now, AMD might stay on top long enough to make Intel's efforts irrelevant - and long enough to roll out its own heterogeneous IP in a slower (and more palatable) incremental fashion to a predominantly AMD industry.

Anonymous

How far behind AMD is Intel predicted to fall once SMT4 arrives?

Anonymous

And on top of that, does Intel have any plans for more than two threads per core?

Anonymous

Same here, any idea when that will be out?

MooresLawIsDead

It's out for premium patrons...now - still prepping all hitchhiker feeds for release tomorrow.