
Content

My next guest will be Dr. Ian Cutress, senior editor at AnandTech. He has over a decade of experience covering Nvidia, AMD, and Intel's products - and unlike most "tech journalists" these days, Ian actually tries to cover Intel fairly during a dark period for the company.


Expect this to be a conversation focusing heavily on Intel Rocket Lake, Ice Lake Server, and other recent and upcoming products from Team Blue. But of course, I am sure he would be happy to discuss almost anything related to AMD, Nvidia, or semiconductors in general. Your questions/comments don't need to be about Intel.


Be concise, use good grammar, and above all else - be thoughtful in your comments below.  We will try to get to as many of the best ones as we can!


NOTE: You have ~30 hours to submit questions (until the end of Wednesday, US Central)


Articles by Ian: https://www.anandtech.com/author/140

Ian's YouTube Channel: https://www.youtube.com/channel/UC1r0DG-KEPyqOeW6o79PByw

LinkedIn: https://www.linkedin.com/in/iancutress/?originalSubdomain=uk


Comments

Anonymous

Tech, Tech Pohtay-toh or Poh-tahtoh? Important question!

Anonymous

Hi Ian. I've been an avid reader of AnandTech for over a decade now, and your writing (along with Anand's and Brian Klug's) always got me hyped about computer tech in general. What role do you think tech journalism plays in shaping the future of the industry, and in shaping people like me who went on to study computer engineering? Finally, what should Intel be doing to stave off the encroaching ARM juggernaut?

Xerox PARC

Hi Dr. Cutress, what do you think the chances are of a non-Apple Arm workstation sitting on or under your desk in the next 5 years? Will anyone besides Apple be able to get high performance desktop (or mobile) silicon to the general market in that timeframe?

Kevin Wise

Hey Tom and Tech Tech Potato, what are your thoughts on the upcoming rivalry between Alder Lake and Zen 3? How much performance uplift do you expect Intel's switch to 10nm and an updated architecture to make in their battle to no longer be known as the budget option?

Anonymous

Hello Dr. Cutress and Tom, my question is about your latest article (at the time of writing) on AnandTech covering the AMD 5000G, R9 5900, and R7 5800 launches. Were you surprised that the non-X variant of the 5800 is OEM-only? Do you think it will come to the DIY market? I was expecting (maybe hoping is a better word) that the R7 5800 or 5700X would be a cheaper option in the DIY segment; I guess AMD didn't see the need to release it after Rocket Lake. Maybe Dr. Cutress could also answer his own question from the article - I got really curious to know his opinion! "Which would you rather have - 100-200 MHz extra CPU frequency, double the L3 cache, and PCIe 4.0, or would you rather have integrated graphics?" I was a bit disappointed that the graphics on this lineup didn't get a real upgrade other than frequency, but with more decent performance and DDR5 coming, along with rising GPU prices, integrated graphics might become a real option sooner rather than later. Thank you!

Beech Horn

AMD boasted about the performance gains due to branch prediction improvements with Zen 3, but remained tight-lipped regarding the Spectre/Meltdown-style attacks this may open up, even when directly asked (https://twitter.com/thracks/status/1324776398368768000?s=21). Now that we have had recent examples of Spectre-style attacks against Zen 3, is it likely we’ll have a similar situation as with Intel of late, where continual remediation incurs performance penalties each time?

Samuel

Hi Ian, loved your reviewception video with Hardware Unboxed. You two are the first places I go anytime new hardware launches, so it was cool seeing y’all interact. Anyway, AMD has been talking about their vision for the future these last few years in what they call heterogeneous compute. I believe the end goal is to develop a fully universal interconnect that works with all types of chips. Then you can just have chiplets for CPU, GPU, FPGA, compute, machine learning, neural nets, etc. and combine them however that particular customer wants. Basically, they’re making a McDonald’s build-your-own-combo menu for computer chips. 😂 My question is: what are the challenges in designing such an interconnect, and is it even possible with current technology, or will we have to wait for 3D stacking? AMD has talked publicly about how much more difficult it is to do with GPU chiplets compared to CPU chiplets, for example. Their recent patent filings on the subject have been intriguing, but I’m not seeing how that technology would necessarily extend to other chiplet types as well. P.S. So glad I stopped eating apples recently. 😜

Anonymous

Hi everyone, what do you expect from the next generation of sockets (Intel's LGA 1700 and AMD's AM5)? How many years or how many CPU generations will they support, especially in connection with the upcoming DDR5 memory? Will there be a trend of extending their life cycles like AMD did with AM4?

Anonymous

Hi Dr. Cutress and Tom. I've been thinking about a discussion from an earlier podcast with, I think, Daniel Nenni. Is it so that Intel's 10nm is the same class of node as TSMC's 7nm? And when Intel has 7nm, will that be similar to TSMC's 5nm? If the above is correct, can you please give some explanation around it? :-)

Anonymous

Hi Dr. Cutress and Tom, there’s always a lot of discussion around hardware. However, software seems to be at least equally important today. Can you talk about Intel’s oneAPI strategy and give a brief, dumbed-down explanation of what it is for us non-CS engineers? Given Intel’s XPU strategy, how does oneAPI fit into system-on-package - mixing and matching tiles of different architectures (CPUs, GPUs, FPGAs, etc.) - and how might it benefit the end user from a use and performance perspective?

Deepest Learners

In the coming era of CXL and Gen-Z interconnects redefining data center infrastructure, do you think on-package heterogeneous computing will be more or less important? A common vision for this next era is having compute disaggregated and pooled across servers and racks, with lots of memory in one rack, accelerators in another, NVMe-oF elsewhere, and CPUs in their own. In this scenario, where does the kind of on-package heterogeneity advanced by AMD and, increasingly, Intel fit in?

Deepest Learners

We’ve seen reports (including from Tom) that there will be HBM on-package for Sapphire Rapids and maybe Zen 4. Why hasn’t either AMD or Intel put memory on-package in their laptop SoCs rather than relying on DIMM slots? Putting a couple of stacks of some kind of DRAM on package and eliminating DIMM slots would simplify motherboard design and increase the value of their products. What am I missing?

Anonymous

Hello Ian and Tom, I love both of your content - it’s incredibly informative and entertaining. Anyway, my question is: with TSMC running away with process technology leadership, wouldn’t it make sense for the other foundries to partner up to bring to market a node that can compete with TSMC’s latest, and then go their separate ways? Something they could later build upon individually, but that could help stop TSMC from turning into a monopoly. Wouldn’t "the enemy of my enemy is my friend" be a valid strategy in this situation?

Cleansweep

Do you think that governments trying to bring new fabs online/get existing fabs on better nodes will have any positive effects for consumers, or will we see most/all of their possible output be funneled into fulfilling government contracts?

Anonymous

Hello Tom and Dr. Cutress! Something I'm wondering about is whether we will see new iterations of chips from either Intel or AMD that are around 6 to 10 watts and support thin, fanless, clamshell or 2-in-1 form factors. I'd like to see inexpensive computing devices that aren't necessarily tablets but are very portable, have long battery life, and are, I guess, basically more useful netbooks.

Mia

Dr. Cutress - your reviews have been my go-to for CPUs for a while, and I really appreciate your in-depth looks at aspects of CPUs on the level of cache latencies, etc. Do you have a sense of how that type of analysis will need to change as compute becomes more and more heterogeneous? It seems most compute device design companies have completely bought into advanced packaging and "semi-custom" as the future, at least at scale. By the way, I very much appreciate that you measure _the_ most important benchmark: Dwarf Fortress.

Anonymous

Hello, Dr. Ian Cutress, I've been speculating about why we haven't seen a 450mm wafer. My guess is that (1) it's hard to produce and fragile due to its size, and (2) now that everyone is adopting chiplets (making smaller chips), we can get the same benefit (less wasted area) as we would from a larger wafer. What are your thoughts on this?

qhfreddy

Ian, you are one of the more seasoned reviewers in the space, but is there anything about reviewing that you still struggle with or find challenging? How has that manifested itself in recent times?

Anonymous

Hello Dr. Wafer Muncher, oh, and homeless Tom :D As a university engineering student looking to go into EE/semiconductors, what advice can you give on what I can do to further my expertise/experience in this field? This is outside of applying to internships and catching up on the term's worth of content I have over the next month, right before exams :D Cheers guys, and keep up the great work!

Anonymous

Hi Ian, how do you see NVIDIA's new CPUs competing against Intel and AMD server offerings 5-10 years down the road? How hard will it be for current AI libraries to adopt NVIDIA CPUs?

QuickJumper

Hey Ian, maybe a question more from a financial perspective: is AMD strong enough to provide long-term competition, or will Intel just kill AMD once again with its financial strength?

Anonymous

Hey Dr. Cutress! Thanks so much for coming on the show! Intel made a particularly high-profile purchase of Mobileye in 2017 (an Israeli company which specializes in autonomous driving technology). Do you think there is a chance that Intel is trying to shift its focus to autonomous driving? Do you think this may further decrease their willingness/bandwidth to innovate in the consumer CPU space, or even prompt them to leave the consumer/enterprise CPU space entirely? Sorry if this question is tangential to the current discussion! Thanks again!

Anonymous

Hello Dr. Cutress, why is Intel so obsessed with AVX (especially AVX-512)? And it seems like AMD is trying to implement it as well. Why wouldn't they just offload this kind of SIMD FP work onto GPUs? Is it because some server companies still haven't moved to GPU computing (like Forrest Norrod mentioned in your interview)?

Anonymous

Like Tom said in his APU video, AMD/Intel are designing a whole stack of products, not just the top SKU.

Anonymous

Good day, gentlemen! Here's a question I am struggling with currently: for cheap budget gaming builds playing only esports titles, which is the better pick: the i5-10400F or the R5 3600? Also consider the choice of platform and the resulting overclocking and memory support you'd get, and its performance impact. Is there an argument for upgrading to a 10600K/Z490 combo or an 11400F/B560 combo for higher RAM speeds and CPU performance while not throwing price-to-performance out of the window? All this assuming you'd actually have the ability and time to super-tune the CPU and RAM.