Hot, Expensive and Blatantly Cynical – NVIDIA Announces the GeForce RTX 4090 & “4080”
September 21, 2022
ADA LOVELACE GeForce Nvidia rtx RTX 4000 RTX 4070 RTX 4080 RTX 4080 12GB RTX 4080 16GB

$1600, bullshit “4X performance” marketing, a $900 RTX 4070 and frame interpolation with DLSS 3.0?

The upcoming generation of GPUs is shaping up to be a strange one. The good news is that, in terms of raw performance uplift over the previous generation, Ada Lovelace looks to be a solid, above-average improvement. The bad news is that for actual consumers this generation will undoubtedly be the worst yet for your wallet, for product value and for corporate scheming.

A new look, a new font, a new NVIDIA? No, not really. NVIDIA is still the same cynical corporation it has always been, with CEO Jensen Huang making several product announcements during its GTC 2022 presentation.

First and foremost is the “most powerful” card that Ada Lovelace has to offer, with NVIDIA announcing the RTX 4090, yours for the measly sum of $1600.

Unless you live in Europe, in which case expect the fake MSRP to be €1,949.

NVIDIA’s Ada Lovelace “flagship” graphics card, at least for the first twelve months, has quite the specifications: 128 SMs totaling 16,384 CUDA cores and 24GB of GDDR6X memory. Just as with the previous generation’s RTX 3090 Ti, the RTX 4090 offers a 384-bit memory interface with 21 Gbps GDDR6X modules, totaling 1008 GB/s of effective bandwidth.
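
For anyone wanting to sanity-check that bandwidth figure, it falls straight out of the bus width and per-pin data rate. A minimal Python sketch, using only the specs quoted above:

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Effective bandwidth in GB/s: (bus width / 8 bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 21.0))  # RTX 4090: 384-bit @ 21 Gbps -> 1008.0 GB/s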

The full AD102 core features a maximum of 144 SMs totaling 18,432 CUDA cores, so as always the supposed “flagship” is merely another stripped-down variant, a placeholder until the eventual RTX 4090 Ti comes into the fray right before the next generation is announced.
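
Quantifying the cut-down is trivial, since Ada packs 128 FP32 CUDA cores per SM; a quick sketch using the SM counts above:

CORES_PER_SM = 128                    # Ada Lovelace: 128 FP32 CUDA cores per SM
full_sms, enabled_sms = 144, 128      # full AD102 vs the RTX 4090

print(full_sms * CORES_PER_SM)        # 18432 cores on the full die
print(enabled_sms * CORES_PER_SM)     # 16384 cores enabled on the RTX 4090
print(1 - enabled_sms / full_sms)     # ~0.11, roughly 11% of the die held in reserve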

The RTX 4090 boasts 76 billion transistors and 82.6 TFLOPs of single-precision performance. AD102 is a big core: built on a custom TSMC 4N process, the die itself comes in at 608 mm². The RTX 4090 demands not only your respect but unsupervised access to your electricity, because it’s one hungry GPU.
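
The 82.6 TFLOPs number is just the standard FP32 formula (cores x 2 FLOPs per clock x boost clock). Assuming the roughly 2.52 GHz boost clock NVIDIA lists for the card, the arithmetic checks out:

def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: one FMA (2 FLOPs) per core per clock."""
    return cuda_cores * 2 * boost_clock_ghz / 1000

print(fp32_tflops(16384, 2.52))  # ~82.6 TFLOPs for the RTX 4090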

The RTX 4090 has a TDP of 450W, but of course NVIDIA loves to downplay the sort of power draw its GPUs have been pulling of late. The 450W figure is subject to change, and AIB partners are expected to offer custom solutions rated as high as 600W.

Now that I’ve glossed over the specifications of the RTX 4090, we must begin our descent into madness with the RTX 4080, which comes in two flavors: a 16GB model and a “12GB” one.

A coin has two sides and the GeForce RTX 4080 has two SKUs. Coincidence? I think not.

The NVIDIA GeForce RTX 4080 comes in two vastly different configurations. The RTX 4080 16GB is built on the AD103 core and comes in at $1200 (€1,469). It offers 76 SMs (9,728 CUDA cores), a regression of over 40% compared to the $1600 “flagship”, alongside 16GB of GDDR6X memory across a 256-bit memory interface, although this time around the memory modules are clocked higher at 22.5 Gbps, equating to roughly 720 GB/s of effective bandwidth.
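
That “over 40%” figure is easy to verify from the core counts alone; a one-liner sketch:

rtx_4090_cores, rtx_4080_16gb_cores = 16384, 9728
print(f"{1 - rtx_4080_16gb_cores / rtx_4090_cores:.1%}")  # 40.6% fewer CUDA cores than the $1600 card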

The card itself carries a TDP of 320 watts, although expect aftermarket models to push beyond 400 watts instead.

The real cause for concern is the “12GB variant” of the RTX 4080, which is built on an entirely different core (AD104), the kind typically reserved for the x70 tier of GeForce product. But naturally, not even Jensen Huang could get a mid-range product like this past the crowd at the insane sum of $899 (€1,099) without dressing it up as something bigger.

The RTX 4080 12GB, more appropriately called the RTX 4070, comes with 60 SMs and 7,680 CUDA cores: less than HALF the core count of the RTX 4090 and over 20% fewer cores than its supposed 16GB “twin”.

With 12GB of GDDR6X on a 192-bit memory interface, it provides 504 GB/s of bandwidth at a 285W TDP. $900 for this pile of shit.
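
Lining the three SKUs up side by side makes the stack painfully clear. A small sketch using the announced specs and the prices quoted above; the price-per-core framing is mine, not NVIDIA’s:

# (CUDA cores, bus width in bits, memory speed in Gbps, USD price)
skus = {
    "RTX 4090":        (16384, 384, 21.0, 1600),
    "RTX 4080 16GB":   ( 9728, 256, 22.5, 1200),
    '"RTX 4080" 12GB': ( 7680, 192, 21.0,  899),
}

for name, (cores, bus_bits, gbps, usd) in skus.items():
    bandwidth = bus_bits / 8 * gbps  # effective GB/s
    print(f"{name}: {bandwidth:.0f} GB/s, ${usd / cores * 1000:.0f} per 1,000 CUDA cores")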

You’ve seen the specs, so what of performance?

All things considered, the performance uplift in the only games that aren’t a biased pile of shit is really solid: going from the RTX 3090 Ti to the 4090, Resident Evil Village shows an improvement of around 60%, Assassin’s Creed Valhalla shows roughly 50%, and The Division 2 sits around 60% as well.

Though in biased garbage such as Warhammer 40K and Microsoft Flight Simulator the uplift hovers around 100%, which is nowhere near indicative of actual performance.

The RTX 4080 16GB showcases around 10-20% greater performance than the RTX 3090 Ti in the only titles that actually matter, while the 12GB variant, or rather the RTX 4070, sits slightly behind the 3090 Ti on genuine pace, which is probably why Jensen decided, in the weeks leading up to the reveal, to call it an RTX 4080 and charge $900 for it.
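
For reference, the uplift percentages being thrown around are computed the usual way (new over old, minus one). A trivial sketch with hypothetical frame rates, since NVIDIA published relative bars rather than raw FPS:

def uplift(new_fps: float, old_fps: float) -> float:
    """Generational uplift as a fraction: 0.6 means 60% faster."""
    return new_fps / old_fps - 1

# Hypothetical numbers purely to illustrate the math, not measured results
print(f"{uplift(160, 100):.0%}")  # going from 100 to 160 FPS is a 60% uplift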

But of course, as any company does, NVIDIA tries its hardest to make its products seem amazing, leaning on the improvements to ray tracing performance combined with DLSS upscaling; with those in play, the RTX 4090 can be marketed as providing up to 300% greater performance than the previous generation.

With the RTX 4000 series comes a new revolution in peasantry: DLSS 3.0.

DLSS 3.0 aims to boost performance far beyond previous iterations. With its marketed “optical flow accelerators”, exclusive to Ada Lovelace, DLSS 3.0 not only upscales gameplay but interpolates frames in real time. Of course, frame interpolation inevitably adds input latency, something DLSS is already notorious for, which is why DLSS 3.0 enables NVIDIA’s Reflex anti-latency technology as standard.

Boosting frame rates dramatically, with as much as a 4X improvement on the table.

No, I am not making this up. DLSS 3.0 provides immense “performance” benefits over previous iterations of the technology because it is artificially creating and splicing in individual frames, so users can now experience the wonders of motion artifacts combined with added latency, all while the interpolated frames inflate the FPS counter.
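
To make the trade-off concrete, here is a crude model of why frame generation can double the displayed frame rate without improving responsiveness; the input numbers are illustrative assumptions, not measurements:

def frame_generation(rendered_fps: float, base_latency_ms: float):
    """Crude model: one interpolated frame is presented after every rendered frame.

    Displayed FPS roughly doubles, but input is only sampled on rendered frames,
    and the interpolator has to hold a frame until the next rendered one arrives,
    so latency does not improve and typically gets slightly worse."""
    displayed_fps = rendered_fps * 2
    added_delay_ms = 1000 / rendered_fps  # one rendered-frame interval of buffering
    return displayed_fps, base_latency_ms + added_delay_ms

fps, latency = frame_generation(rendered_fps=60, base_latency_ms=40)
print(fps, latency)  # 120 "FPS" on screen, ~56.7 ms of input latency

Stack that doubling on top of a roughly 2x gain from upscaling and you can see where the “4X” headline figure comes from.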

It’s borderline criminal, but from a marketing perspective something of this magnitude was entirely necessary for NVIDIA to downplay AMD’s open-source FSR, which has seen far broader adoption in a much shorter amount of time.

DLSS 3.0 will be available in over 35 titles, the majority of which are pure shit or obscure trash.

But in typical fashion for RTX and DLSS, the games it will predominantly be featured in are pure garbage, though I am sure that YouTubers will have a field day comparing performance with RTX + DLSS 3.0 in Cyberpunk 2077.

Generally speaking, we don’t have a proper gauge on actual performance figures for NVIDIA’s RTX 4000 series, as the RTX 4090 is set to launch on October 12th with the RTX 4080 16GB and the not-a-4070 following the next month. Rest assured, though, that the generational uplift this time around is significant: at least 60% over the previous generation’s RTX 3090 Ti.

But as per usual, we’ll have to wait for the paid reviewers to get their hands on Ada Lovelace before we can get an actual sense of things, though I imagine the majority of reviews will lean heavily on RTX-supported titles combined with DLSS 3.0.

But even so, the massive price increases this generation are being justified by its performance: a $1600 flagship, a $1200 x80-tier graphics card, and $900 for the mid-tier, AD104-based x70-tier GPU, rebranded at the last minute into the amalgamation that is the 12GB “RTX 4080”.

These prices are a fucking scam, but they undoubtedly open the door once again for AMD to rush in and steal the thunder with competitive performance (and pricing), though for consumers there’s no real victory in a duopoly market.

AMD is set to announce its next-generation RDNA 3 graphics cards in the coming weeks. The RX 7000 series ought to be very competitive against NVIDIA’s lineup: not quite enough to topple the RTX 4090, but certainly close enough to cause significant damage to the egos of NVIDIA’s customers.

There’s no real salvation there either, considering it has long been rumored that AMD’s flagship will fetch an insane price tag of its own, either $1200 or $1500 for the “7900 XT”, which will undoubtedly provide better performance, and therefore better value, than NVIDIA’s RTX 4080 16GB.

Navi 33, being built on TSMC’s 6nm process node, could offer as many as 4,096 shaders, equating to performance around that of a 6800 XT, and could potentially be sold for around $400.

Or perhaps we’re just looking at the end of the DIY PC market, as an influx of used mining GPUs joins the fray while consumers are priced out of next-generation hardware entirely.
