Quick News

GPD are switching to AMD in their popular handheld gaming console-style (clamshell/DS-like) PC line (the Win line; this is the Win Max model), using the same Ryzen V1605B SoC as the Smach Z. From what I can see it's essentially the embedded version of the R5 2500U (4C/8T, 3.6GHz max boost with Vega 8 graphics, 12-25W configurable TDP).

https://www.tomshardware.com/news/amd-intel-gpd-win-max,39127.html

EDIT: Also, the Ryzen 3 3200G has reportedly been de-lidded, which seems to confirm previous information that these parts are Zen+/12nm versions of the 2000G chips (https://www.techpowerup.com/254804/amd-ryzen-3-3200g-pictured-and-de-lidded)
 
Did anyone see that Tesla entered the chip market last night? (Jim Keller and Pete Bannon are behind this possible little masterpiece of a chip.) It's a 230mm^2 piece of silicon manufactured by Samsung that Musk claims to be "the best chip ever built". Tesla famously dropped Nvidia a year or so back because Nvidia didn't have any hardware on the market that could deliver Full Self Driving within Tesla's required timeframes. Not only does the new Tesla FSDC seem to significantly outperform the Nvidia Xavier hardware originally intended for the job, but Nvidia's rebuttal this morning that they *do* have capable hardware really proved Tesla's point: the only thing they have comparable to Tesla's chip (which has been shipping for around a month) is a device that consumes 4x the power for only 2x the theoretical max performance, and it isn't ready for the mass market.

Not sure what's more surprising: the fact Musk stuck to the timeline outlined 6 months ago, or the fact they've actually come out all guns blazing and put current CPU/GPU manufacturers to shame in terms of perf/watt and safety. Really it's the software stack built on top of it that will decide this chip's future, but that's looking quite favourable compared to the stack Nvidia not long ago infamously "ripped" from an Intel open source project.
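To put rough numbers on that perf/watt claim (normalised, hypothetical figures that just restate the ratios above, not measured data): twice the throughput at four times the power works out to half the efficiency, i.e. a 2x perf/watt advantage for Tesla.

```c
#include <stdio.h>

/* Hypothetical normalised comparison: Tesla's chip as the 1.0 baseline,
 * the Nvidia part at "2x the performance for 4x the power" per the post. */
int main(void)
{
    double tesla_perf = 1.0, tesla_power = 1.0;   /* baseline */
    double nv_perf = 2.0, nv_power = 4.0;         /* claimed ratios */

    double tesla_eff = tesla_perf / tesla_power;
    double nv_eff = nv_perf / nv_power;

    printf("Tesla perf/watt advantage: %.1fx\n", tesla_eff / nv_eff); /* 2.0x */
    return 0;
}
```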
 

That is why they hired Jim Keller. Tesla already dropped Nvidia from their MCS, but now they don't need them for the self-driving chips either.
 
Haha. The "We are totally shipping 10nm in low volumes" chip. Oh Intel...

This makes 14nm's delay seem like nothing.
Design before etching: 8-core/16-thread 5.2GHz CPU with iGPU.

Chip after binning and etching: 2-core/4-thread 3.2GHz CPU without iGPU.
 
To be fair, Charlie Demerjian over at SemiAccurate has been saying Intel's 10nm was a broken mess since about 2015, and he seems to have hit the nail on the head consistently over these years with its progress. Going from all of his leaks, that wccftech article seems quite realistic. It sounds like they essentially had to go back to the drawing board in 2018 to fix the yield issues and more or less create a new, less ambitious node entirely for performance parts.
https://semiaccurate.com/?s=10nm

Also, of course, back when Intel announced 14nm+ (or whatever it was called), they basically said in no uncertain terms that the initial generations of 10nm would be much slower than mature 14nm. And then there's the fact that Coffee Lake was never meant to exist.

“We hear that internally Intel is quite worried about making the launch of Cannon even though it is still about a year away. This is no ordinary early silicon issue, it is a serious and unexpected problem. Coffee lake being added at the last minute between Kaby and Cannon should shed some light on the depths of Intel’s 10nm woes, things are a mess. More when we get it, but for now, not a merry Christmas for those singing from hymnals D1C and D1D.” -SemiAccurate Dec 22, 2016
 

I'd go as far as saying that Kaby Lake wasn't meant to exist. That was when they switched from Tick-Tock to PAO (Process, Architecture, Optimisation).
 
Yeah, to be honest, even if this leaked roadmap is accurate, with Intel over the last few years that doesn't seem to mean much, because frankly they have had no idea what their roadmap will have to look like once the competition moves.

10nm was originally meant to roll out in 2015, going from their roadmaps before then.
 
On the discussion of why loops are problematic in software the other day: if anyone watched Hulkenberg try to hard-reset his F1 car at full speed in Shanghai before having to retire, it turns out the failure was caused by someone forgetting to make sure a loop always reached an exit condition.

https://www.motorsport.com/f1/news/renault-software-fix-code-abiteboul/4376647/

“We have a very simple change in one line of code, and hopefully that will have sorted the problem we had in Shanghai,”

“Basically it’s a default mode that can be triggered. It’s an infinite loop, like sometimes on your laptop, when you see the task manager consuming 98% of the whole CPU. It’s exactly what happened, it’s an open loop and the system was trying to go through that open loop and go through a new process lap after lap, because of the default mode."
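For anyone wondering what that failure mode looks like, here's a minimal hypothetical sketch in C (not Renault's actual code; sensor_ready, wait_for_sensor, and the poll limit are all made up for illustration): an unbounded polling loop, and the one-line style of fix that bounds it so it always reaches an exit condition.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the fault mode: the condition the loop is waiting on
 * never becomes true, so an unbounded loop would spin forever. */
static bool sensor_ready(void)
{
    return false;
}

/* Broken version (never exits while sensor_ready() is stuck false):
 *
 *     while (!sensor_ready()) { }
 *
 * Fixed version: bound the loop so it always terminates, and report
 * the failure instead of hanging the system. */
static bool wait_for_sensor(uint32_t max_polls)
{
    for (uint32_t i = 0; i < max_polls; i++) {
        if (sensor_ready())
            return true;   /* normal exit */
    }
    return false;          /* bounded exit after max_polls attempts */
}

int main(void)
{
    if (!wait_for_sensor(1000))
        printf("sensor timeout, falling back to a safe default mode\n");
    return 0;
}
```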

(These engines are ~$10 million pieces of hardware, about half the cost of the cars)
 

Actually these days (the 2019 season) the cars are far more expensive. Even the gearboxes approach $6-7 million now, and tyre sets are roughly $5,000 each, with teams going through about 15 sets minimum per weekend. I'm not shocked that even Ferrari considered quitting.

On that note: bought the new Codemasters F1 game. Sweet Jebus it is baYUTEeeful.
 
My friend grew up with Alex Albon (same high school; Alex is 1 year older), so I was going from his estimations after that tank slapper the other day that destroyed his car, though that was without tyres or anything, and there's still a lot of secrecy even to the drivers. He gets paid about £300k a year (pretty much all of which goes back into team costs), and we were trying to work out how much of a car his wages would get him (we worked out that's roughly the nose cone, at most).

Very glad the next F1 game includes Formula 2 cars. Great race here in Baku atm; ofc Alex got the triple here in Baku Formula 2 last year.
 
Using Thermal Grizzly Kryonaut in my laptop did a world of good; it meant the CPU could hit its boost clocks constantly. With modern CPUs that can make a pretty big difference for the small cost.
 
Detailed specs leaks for PS5 seemingly from a meeting Sony held on it:

"8 core Zen 2, clocked at 3.2Ghz.

Custom Navi GPU, 56CU, 1.8Ghz, 12.9TF. RT is hardware based, co engineered by AMD and Sony. (They believe the RT hardware is the basis for the rumour that Navi was built for Sony)

24GB RAM (Type or bandwidth wasn't mentioned)

Custom embedded Solid State solution paired with HDD."

Source: https://www.reddit.com/r/PS5/comments/bhabap/well_here_we_go/
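For what it's worth, the 12.9TF figure is at least internally consistent with the leaked CU count and clock, assuming the usual GCN arrangement of 64 shaders per CU at 2 FLOPs per shader per clock (FMA) carries over to Navi; a quick sanity check:

```c
#include <stdio.h>

/* FP32 throughput = CUs x shaders per CU x 2 ops per clock (FMA) x clock.
 * The 64-shaders-per-CU figure is the GCN standard, assumed here for Navi. */
int main(void)
{
    int cus = 56;
    int shaders_per_cu = 64;
    double clock_ghz = 1.8;

    double tflops = cus * shaders_per_cu * 2 * clock_ghz / 1000.0;
    printf("%.1f TFLOPS\n", tflops);  /* prints 12.9, matching the leak */
    return 0;
}
```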
 
Saw this earlier. I don't believe the 24GB of RAM, but everything else is believable, as it's based on current parts or already-rumoured specs for Navi.

I say we see 12-18GB of memory, with 2-3GB dedicated to the OS. The PS4 Pro has 1GB of DDR3 dedicated to the OS; I could see them extending that approach to the new platform.
 
I'm not sure at this point. If it's targeting 4K, then most current titles already target 8GB of VRAM and 16GB of system memory on PC, and future titles could easily use around 12GB of VRAM for 4K given they often hit 4GB for 1080p now, especially with how memory-heavy raytracing gets. Meanwhile the XboneX is already at 12GB, and its devkit had 24GB of GDDR5. This could just be a devkit spec too, but 16GB of shared memory wouldn't seem very future-proof, or like it matches up to the rest of the specs (i.e. could keep the hardware well fed into the future), since we're looking at a bare minimum of a 5x jump in both CPU and GPU power with those specs. I just don't see how they could have meaningful raytracing and sub-20GB of total system RAM at the same time, and RAM is a lot cheaper than it was around the X1/PS4 launch (you can get cheap Chinese smartphones with 8GB+ of RAM nowadays, while the PS3 to PS4 transition was a 32x jump in RAM size).
 