Now the Good Stuff
The RTX 4090 appears to be the best GPU for content creators and professionals. Last generation the RTX 3080 performed well, but it's clear that the RTX 4090 will be the GPU to buy for professional use. Nvidia has refined its distinctive and very nice looking Founders Edition design for the 4000 series: the fans are slightly larger for better airflow, and power delivery has been upgraded to 23 phases. The 4000 series uses the new 16-pin power connector, with an adapter that accepts up to four 8-pin power connectors. Only three 8-pin connections are required; the fourth allows for higher overclocks. The new 16-pin connection will also support the ATX 3.0 power supplies coming to market in a few weeks.

The AD10x micro-architecture brings some very nice features for content creators, streamers and professionals. Nvidia's superb encoder (NVENC) is getting an upgrade that includes dual AV1 encoders. This will be important for streamers because Nvidia is claiming up to a 40% streaming efficiency increase over the current RTX 3000 series. Software will be updated to take advantage of the new dual AV1 encoders, keeping FPS high while streams continue to look great and run smoothly. Best of all, picture quality should get even better at the same or lower bitrates than we are using today, which means smaller file sizes and less bandwidth used. Gamers will be able to record gameplay at up to 8K at 60 FPS. For content creators, hobbyists and professionals, the dual AV1 encoders/decoders should allow workloads to complete twice as fast as on the RTX 3080. The RTX 3000 series is capable of AV1 decoding, but not AV1 encoding; the RTX 4090/4080 will enable both, something previously seen only in Nvidia's enterprise-grade GPUs. CUDA compute capability is also being bumped from 8.6 to 8.9. DLSS 3 will be exclusive to RTX 4000 users.
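To illustrate why lower bitrates at the same quality translate into real savings, here is a minimal sketch of how stream size scales with bitrate. The bitrates and the ~30% AV1 saving below are illustrative assumptions for the example, not Nvidia or codec-spec figures:

```python
# Rough illustration: data transferred when streaming at a lower bitrate
# for the same picture quality. The 30% saving is an illustrative
# assumption, not an official figure.
def stream_size_gb(bitrate_mbps: float, hours: float) -> float:
    """Total data transferred, in gigabytes (1 GB = 1e9 bytes)."""
    return bitrate_mbps * 1e6 * hours * 3600 / 8 / 1e9

h264_gb = stream_size_gb(8.0, hours=2)       # e.g. a 2-hour 8 Mbps H.264 stream
av1_gb = stream_size_gb(8.0 * 0.7, hours=2)  # same quality at ~30% lower bitrate

print(f"H.264: {h264_gb:.1f} GB, AV1: {av1_gb:.1f} GB")  # 7.2 GB vs ~5.0 GB
```

The same arithmetic applies to recorded gameplay on disk: every percentage point of bitrate saved at equal quality is a percentage point off the file size.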
DLSS 3 and Optical Flow will now allow the GPU to generate entire frames, rather than just pixels, for the final output we see on-screen. This could also mean less denoising and sharper results in future titles. DLSS 3 will also increase framerates and improve frametimes, which means that even if you are CPU-bottlenecked the GPU will do most of the heavy lifting. Another thing to note is that Nvidia is sticking with the older DisplayPort 1.4a standard. This definitely matters for certain professional workloads. For the record, Nvidia has been on the DisplayPort 1.4 family since 2016 (the GTX 1080 officially used DP 1.2, but could run in DP 1.4 mode). For the average gamer this probably won't matter much, since they are usually chasing framerates and frametimes. It is also possible that Nvidia will update the DisplayPort standard on the enterprise GPUs based on Ada Lovelace, but there are many variables more important than the DisplayPort standard when it comes to professional workloads. Regardless of what Nvidia does at the enterprise level, DP 1.4a is still capable of some very impressive resolutions and framerates (8K @ 60 FPS / 4K @ 120 FPS). HDMI will continue on the 2.1a standard used by the RTX 3000 series, which likewise offers 8K @ 60 FPS and 4K @ 120 FPS.
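To put those display modes in context, a quick sketch of the raw pixel data rates involved. It assumes 8-bit RGB and ignores blanking overhead; DP 1.4a's effective payload after encoding overhead is roughly 25.92 Gbps, which is why the highest modes lean on Display Stream Compression (DSC):

```python
# Raw (uncompressed) video data rate for a display mode, ignoring
# blanking intervals. Assumes 8-bit RGB, i.e. 24 bits per pixel.
def raw_gbps(width: int, height: int, refresh_hz: int, bpp: int = 24) -> float:
    return width * height * refresh_hz * bpp / 1e9

DP_14A_PAYLOAD_GBPS = 25.92  # HBR3 x4 lanes after 8b/10b encoding overhead

for name, mode in {"4K @ 120": (3840, 2160, 120),
                   "8K @ 60": (7680, 4320, 60)}.items():
    rate = raw_gbps(*mode)
    verdict = "fits" if rate <= DP_14A_PAYLOAD_GBPS else "needs DSC"
    print(f"{name}: {rate:.1f} Gbps ({verdict})")
```

4K @ 120 comes in around 23.9 Gbps, just under the payload limit (before blanking overhead), while 8K @ 60 is roughly 47.8 Gbps and only works thanks to DSC.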
Conclusion
Nvidia is placing the RTX 4090 in a league of its own in terms of specification, price and performance. As I stated earlier in the article, Nvidia does not want the RTX x80 anywhere near the x90 GPU this time around. It’s clear that the RTX 4090 is aimed at prosumers, professionals, content creators and those who need more than trivial gaming performance. The Titan is a thing of the past and the RTX x90 is filling that gap; Nvidia has priced it accordingly for those who don’t want to buy a much more expensive enterprise Nvidia GPU for professional use. Nvidia claims the RTX 4090 is twice as fast as the RTX 3090 Ti, and the RTX 4080 twice as fast as the RTX 3080 Ti. Nvidia set the stage with the RTX 3080/3080 Ti and 3090/3090 Ti and is now increasing the price of the upcoming RTX 4080 16GB/12GB. Nvidia appears to be positioning its tiers and its future tech in a window it has carefully planned over the past several years. The only way to upset this plan is for AMD to do the unthinkable and match Nvidia with competitive prices and performance. However, this is highly unlikely. AMD is in a prime position to undercut Nvidia, but I don’t think AMD will be able to do it in the current economy. Nvidia must feel very comfortable with anything AMD will release based on its current pricing, and it has also given itself a nice cushion for the inevitable RTX 4080 “Ti“ release regardless of what AMD does. Hopefully AMD has an answer for the incoming RTX 4000 series’ impressive performance increase. Even though the RTX 4080 16GB has scaled back some of the memory configuration hardware, it should perform very well against the current RTX 3080 and RTX 3080 Ti. Other GPU hardware has been increased based on the information I have seen (SMs, ROPs, RT Cores, L2 cache, etc.), but users coming from the RTX 3080 will definitely be taking a 20% decrease in memory bus width and roughly a 3% decrease in memory bandwidth.
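Those percentages fall out of simple arithmetic. A minimal sketch, assuming the figures circulating at announcement time (RTX 3080: 320-bit bus at 19 Gbps; RTX 4080 16GB: 256-bit bus at 23 Gbps; these are assumptions, not confirmed retail specs):

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps) -> GB/s.
# Specs below are announcement-time figures and are assumptions.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

rtx_3080      = bandwidth_gbs(320, 19.0)  # 760 GB/s
rtx_4080_16gb = bandwidth_gbs(256, 23.0)  # 736 GB/s

bus_change   = 1 - 256 / 320                 # 20% narrower bus
bw_change    = 1 - rtx_4080_16gb / rtx_3080  # ~3% less bandwidth
clock_change = 23.0 / 19.0 - 1               # ~21% higher memory clock
print(f"bus -{bus_change:.0%}, bandwidth -{bw_change:.1%}, clock +{clock_change:.0%}")
```

In other words, the faster memory clock claws back almost all of what the narrower bus takes away.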
This was obviously done to keep the tiers in check. It is worth noting that the RTX 4080 16GB memory is clocked 21% higher than the (stock) RTX 3080. When it comes to the RTX 4080 12GB, it appears Nvidia did the opposite: you’ll actually get less of everything compared to the RTX 3080 based on the information I have seen so far (SMs, ROPs, RT Cores, L2 cache, etc.), but the RTX 4080 12GB should still outperform the RTX 3080 easily thanks to the updated micro-architecture and the larger L2 cache. Compared to the RTX 3080, the 4080 12GB’s memory bus is 40% narrower and its memory bandwidth about 34% lower. Depending on their workloads, current RTX 3080 users will need to decide whether losing that memory bandwidth is worth it. The RTX 4080 12GB memory is clocked higher than the 3080’s, so even with lower memory bandwidth it should perform well against the RTX 3080.
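A similar back-of-the-envelope check for the 12GB card, again assuming announcement-time figures (RTX 3080: 320-bit at 19 Gbps; RTX 4080 12GB: 192-bit at 21 Gbps; assumptions, not confirmed specs):

```python
# Same bandwidth arithmetic for the RTX 4080 12GB; specs are assumptions.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

rtx_3080      = bandwidth_gbs(320, 19.0)  # 760 GB/s
rtx_4080_12gb = bandwidth_gbs(192, 21.0)  # 504 GB/s

bus_deficit = 1 - 192 / 320                 # 40% narrower than the 3080
bw_deficit  = 1 - rtx_4080_12gb / rtx_3080  # ~34% less bandwidth
print(f"bus -{bus_deficit:.0%}, bandwidth -{bw_deficit:.0%}")
```

Here the higher memory clock only partially offsets the much narrower bus, which is why the bandwidth gap is so much larger than on the 16GB card.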