Intel may have killed off their 10nm process

as an intel user from 2004 to 2017 i think this is great. :)

everything that helps AMD is welcome at this point in time.

we need competition.
a few bad years for intel (especially on the server side) will get money into AMD's R&D department and will help all of us in the end.


i really hope the ryzen 3000 series will be so good that intel cannot counter them in 2019.


intel has milked the enthusiasts for too long.


i don't care about brands, i care about performance per dollar. fanboys are idiots.


well, besides performance i care most about system stability... but my AMD systems are as stable as my Intel systems, and i even have fewer driver issues on my AMD systems.
 
Wow, that's crazy. If it's true, imagine the stress the executives had that day.

"So, how was your day, hunny?"

"Ohhh, pretty tough, darling."

"Yeah? Wanna talk about it? What happened?"

"We just cancelled 10nm."

"..."
 
I think Intel overshot the time frame for Ice Lake's 10nm, which was supposed to be a DDR4 platform. And now with DDR5 coming in 2020, they decided to ditch it completely and go for whichever Lake comes after that for DDR5. It was probably better to cut the losses and focus all efforts on DDR5.
 
I would reckon they have put all DDR4-related projects on 14nm (++++?) and are working on their own 7nm node for DDR5 platforms, to give them some time to tape it out and get the yields up. I would honestly say they will call it 7nm so they don't seem behind TSMC and Samsung, who will be trialling 5nm by that point.
 

It is just a number. 99.99% of the people that buy them don't even know what lithography is. Also, there isn't a standardized measurement for the process, so TSMC's 7nm is about the same size as Intel's 10nm. One is measuring the rim, the other is measuring the rim with the tire on it.

I've dived a lot into the lithography process, but I am still far away from the truth. As Tom would say, "I just pretend to know more than the others." :D

I assume that even on the 14nm process, using EUV lithography (13.5nm wavelength) would achieve much better results than the current 193nm wavelength. The main problem now is multiple patterning: basically, you are trying to paint a 14nm pattern with a 193nm paintbrush.
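To put some rough numbers on the "193nm paintbrush" point: the standard back-of-the-envelope estimate is the Rayleigh criterion, minimum half-pitch = k1 × λ / NA. Here is a quick sketch; the k1 and NA values are typical published ballpark figures for immersion DUV and first-generation EUV scanners, not the parameters of any specific Intel or TSMC process.

```python
# Rayleigh criterion sketch: smallest half-pitch (nm) a single
# exposure can resolve, CD = k1 * wavelength / NA.
# k1 and NA below are typical ballpark figures, not exact process data.

def min_half_pitch(k1, wavelength_nm, na):
    """Minimum printable half-pitch in nm for a single exposure."""
    return k1 * wavelength_nm / na

# 193nm ArF immersion lithography (NA ~1.35, practical k1 ~0.28)
duv = min_half_pitch(0.28, 193, 1.35)   # roughly 40 nm

# 13.5nm EUV (NA ~0.33, k1 ~0.4 for early tools)
euv = min_half_pitch(0.40, 13.5, 0.33)  # roughly 16 nm

print(f"DUV single exposure: ~{duv:.0f} nm half-pitch")
print(f"EUV single exposure: ~{euv:.0f} nm half-pitch")
```

With a single 193nm exposure bottoming out around ~40nm half-pitch, anything much finer needs multiple patterning passes, which is exactly why the pattern counts (and costs) explode on leading-edge DUV nodes.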
 
Multi-patterning is an incredibly expensive technique originally intended as a stop-gap solution while EUV ramps up. As EUV has been pushed back, multi-patterning techniques are what have allowed Samsung and co. to release sub-28nm products, with exceedingly complicated and expensive numbers of patterns required for Samsung's recent 10nm node that hit consumer devices over the last year. It has also been expected to be a big part of Intel's 10nm, as they were convinced EUV would not be ready in time.

However, TSMC, Samsung, etc. have all ramped up EUV production for 7nm as of ~March, allowing them to significantly cut down the skyrocketing number of "patterns" needed for reliable production, while Intel has been stuck trying to make increasingly complicated patterning techniques work with 193nm-wavelength lithography, which is likely a big part of the reason they haven't made the same progress.

Basically, Intel bet on the wrong horse, and now rather than being a step or two ahead, they're desperately trying to retrace the steps of their competitors.
 