Vega 64 2020 + X58 Review
Kana's FineWine Edition


Update: 1-9-2021: 
RX Vega 64 2021 Article


Update: 7-21-2020: 
Price and Performance Comparison against RTX 2070\2060S - RX 5700 XT - GTX 1080\1080 Ti



Introduction


I finally got my hands on a decently priced RX Vega 64 LC to replace my R9 Fury X. I already knew that another "FineWine" review article was coming the moment I started playing a few games. It took a while to finish everything, but I have benched the Radeon RX Vega 64 LC against 18 different games at multiple resolutions, which works out to roughly 64 individual benchmarks. Although it was time-consuming, I believe it was worth it. My review is somewhat unique because it runs on my 12-year-old Intel X58 platform, which only offers a PCIe 2.0 x16 slot for the Vega 64 in 2020. This review should give people more insight into how well the two perform together. As usual, feel free to leave your comments at the end of the article.


Here Is A Little GPU History


Better late than never seems to be AMD's mantra, and apparently my mantra as well, since I only recently retired my Radeon Fury X after 5 years. However, that statement doesn't tell you the entire story about AMD. AMD had roughly a decade of struggles while attempting to compete for market share across several IT sectors. Being one of the smallest of the "Big 3", the other two being Intel & Nvidia, AMD doesn't have the same budget and cash flow as their main competitors, so AMD cannot always follow the same release schedule that their larger competitors follow. AMD has always brought us competitive GPUs for the price per performance across different tiers. Even with great price-per-performance solutions it was an uphill battle for AMD, since Nvidia claimed most of the market share and was the "go to" recommendation among enthusiasts across the internet. So when AMD released the Radeon R9\Fury series in 2015 they had a lot on the line. AMD wanted the high-end crown, and if they couldn't get it they would at least try. The media and most enthusiasts weren't so kind to AMD's high-end Fury X offering for several reasons. I personally believe the Fury X was a solid GPU, as I explained in my last article (click here to read my Fury X 2020 review) as well as my initial Fury X review back in 2015. The year after the Fury series, in 2016, AMD released their final Fiji-based GPU, the dual-GPU Radeon Pro Duo. The Radeon Pro Duo was mostly aimed at the prosumer and retailed for $1,499.99 at release. The problem was the price; most people could just get two GTX 980 Tis (SLI) and call it a day for gaming purposes. The Radeon Pro Duo and two GTX 980 Tis in SLI performed more or less the same in games that supported Crossfire\SLI, but the Radeon Pro Duo wasn't aimed solely at gamers. It was more of a benefit for prosumers and professionals compared to the more expensive options from Nvidia, and for software outside of gaming the Radeon Pro Duo was a great GPU for workstations.


AMD’s Mainstream Strategy


AMD had to take a step back due to cash flow issues after the Fury and Radeon Pro Duo series released. The Fury and the Pro Duo used a new memory interface called "HBM" (High Bandwidth Memory), and releasing a new GPU architecture along with a new memory interface can be very costly while AMD was already dealing with revenue and debt issues. Luckily AMD was still able to give us competitive prices with new technology during their rough years in the GPU market. Following the Radeon R9\Fury series, AMD's new strategy was to target the "mainstream" market (read more here). This market is where the majority of GPUs are sold, and AMD wanted to target consumers in the $100 - $300 range (Polaris RX 400 and 500 series). This obviously left Nvidia free to reign over the high-end tier for years, and Nvidia did just that with the GTX 980 Ti, the Titan X\Xp combo and the GTX 1080. That was until a couple of new challengers arrived in Q3 2017 to take on the GTX 1080: the Radeon RX Vega 56 and RX Vega 64.


Better Late Than Never – Two New Challengers Appear


The Radeon RX Vega 56\64 release in 2017 marked AMD's return to the high-end & enthusiast market segments, their first since the Fury X released in June 2015. Unfortunately, following the same cadence as the late Fury X vs the GTX 980, the RX Vega 56\64 arrived very late to combat the GTX 1080. Back then, Nvidia released their GTX 980 "Ti" two weeks before the Fury X launched to take the wind out of AMD's sails. This time around, Nvidia released their GTX 1080 "Ti" a full five months before AMD was able to get the Vega series to market. Now you see why I said "better late than never" earlier. AMD's biggest issues were money and release timing, but things have changed now that the company has overtaken Intel in recent sales in the enthusiast gaming PC market. AMD is also gaining more support from OEMs, prosumer workstation sales and multiple console architectures.

After AMD's Ryzen CPUs released, the company made a 180-degree turn and is now in a position to compete more than ever. Overtaking Intel was a huge victory, but now AMD needs to carry that momentum over to the GPU side. Nvidia won't be the same kind of opponent that Intel was; Nvidia has had years to prepare and usually won't rest on the same tech the way Intel did. Nvidia also has a great track record of bringing worthwhile performance upgrades year after year for a lot of users. AMD has more or less kept mainstream & mid-range prices in check, but we are now faced with all-time-high GPU prices, mostly due to a lack of competition, and mining didn't help matters either. Gamers have shown that they have no problem spending $1,200 and up to $2,000 on a consumer-grade GPU. I personally don't like this trend, but this is what consumers have chosen to support.


A New King In My Machine


I have recently retired my Fury X after 5 long, awesome years, and now my 12-year-old platform (Intel X58) has a new king in the machine. I have replaced my Liquid Cooled R9 Fury X with a Liquid Cooled Radeon RX Vega 64. Since I am still on PCIe 2.0 and a 2008 Intel X58 platform in 2020, it's going to be interesting to see how well my build performs with this much newer Vega 64. So you could say this review is somewhat "unique" in its own way. I have a thing for legacy tech, and we will see how well the old X58 and the 3-year-old Vega 64 perform in 2020. If you haven't read my Fury X 5-year review in 2020, you should check it out as well for a quick comparison.




What is AMD's "Vega" Architecture?


"Vega" is AMD's GPU architecture that expands on their previous GCN designs, and unlike earlier iterations it makes some drastic changes to GCN. Vega marks the 5th generation of the GCN architecture. AMD relabeled their CUs (Compute Units) as NCUs (Next-Generation Compute Units). Like Fiji's 64 CUs, Vega also tops out at 64 NCUs; however, performance has been greatly improved at the architectural level. By the way, the brand names "Vega 64" and "Vega 56" refer to the number of NCUs enabled on each GPU. Fiji-based GPUs (Radeon R9\Fury series) topped out at roughly 8.6 TFLOPS of shader performance, while the Vega architecture reaches 13.7 TFLOPS. That peak varies depending on which Vega model is used: the Liquid Cooled "Vega 64" hits 13.7 TFLOPS, the air-cooled "Vega 64" reaches about 12.66 TFLOPS, and the "Vega 56" sits lower still. The differences come down to clock speeds, enabled NCU count, heat concerns and power constraints. AMD RTG (Radeon Technology Group) focused much more on power efficiency along with low latency with Vega, which is very important in the enterprise space.
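For a rough sense of where those TFLOPS figures come from, here is the standard peak-throughput arithmetic as a small Python sketch. The 1,677MHz liquid-cooled boost clock is the one mentioned later in this article; the 1,546MHz air-cooled boost clock is an assumed reference figure, so treat the output as illustrative rather than official.

# Peak FP32 throughput for a GCN/Vega GPU:
#   stream processors x 2 FLOPs per clock (one fused multiply-add) x boost clock
def peak_fp32_tflops(compute_units: int, boost_clock_mhz: float) -> float:
    alus = compute_units * 64          # 64 ALUs (stream processors) per NCU
    flops_per_clock = alus * 2         # an FMA counts as 2 FLOPs per ALU per clock
    return flops_per_clock * boost_clock_mhz * 1e6 / 1e12

print(peak_fp32_tflops(64, 1677))      # ~13.7 TFLOPS (Vega 64 Liquid Cooled)
print(peak_fp32_tflops(64, 1546))      # ~12.66 TFLOPS (air-cooled Vega 64, assumed clock)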

Vega is built on the 14nm FinFET LPP process for lower power usage. Unlike Fiji (28nm), Vega takes advantage of "Rapid Packed Math", which allows a theoretical throughput of 27.4 TFLOPS. Rapid Packed Math lets Vega pack two FP16 operations where a single FP32 operation would go, doubling the throughput (13.7 x 2 = 27.4). It isn't limited to 16-bit values either; 8-bit operations can be packed as well, and 32-bit is of course still supported. AMD also implements the IWD (Intelligent Workload Distributor), which helps prevent the pipeline from stalling due to things such as context switching or smaller draw calls that might not completely fill the pipe. Speaking of the pipeline, AMD's engineers went the extra mile to make Vega their highest-clocked Radeon GPU to date (1,677MHz) when it released in 2017 (Fiji was only 1,050MHz and had limited overclocking potential). Higher clocks mean deeper pipelining, which could lead to detrimental results if pipeline bubbles or stalls appear during computation. AMD's RTG engineers were nevertheless able to keep the ALU pipeline at just four stages without giving up performance, despite all of the other new and dramatic changes to the 5th-generation Vega\GCN architecture.
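To make the "packed" part a bit more concrete, here is a tiny Python sketch of the idea: two FP16 values occupy one 32-bit word, so each ALU can retire two FP16 operations per clock instead of one FP32 operation. This is only a conceptual illustration, not how the hardware is actually programmed.

import numpy as np

pair = np.array([1.5, -2.25], dtype=np.float16)  # two half-precision values...
packed = pair.view(np.uint32)[0]                 # ...fit in a single 32-bit word
print(hex(packed))                               # one register lane, two operands

fp32_peak = 13.7                                 # FP32 TFLOPS (Vega 64 Liquid Cooled)
fp16_peak = fp32_peak * 2                        # packing doubles it: 27.4 TFLOPS
print(fp16_peak)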


Display and Image Performance


As far as displays go, Vega supports up to 8K @ 60Hz with 32-bit color and everything in between. Using 64-bit HDR allows higher refresh rates such as 4K @ 120Hz, and it can go up to 8K @ 60Hz as well. DisplayPort 1.4 and HDMI 2.0 are supported. Vega supports several display configurations, the biggest being x6 simultaneous 4K @ 60Hz outputs, x2 4K @ 120Hz, x3 4K @ 60Hz (64-bit HDR) and x3 5K @ 60Hz.

The Draw-Stream Binning Rasterizer (DSBR) was another selling point. DSBR is a "tiled" approach to rendering, meaning the image is divided into tiles, primitives are gathered into batches, and the GPU then processes those batches one bin at a time; apparently this takes only one clock in the pipeline per bin. DSBR has had some controversy surrounding it in the gaming scene: apparently it works on a per-game basis and requires developer support in addition to AMD driver updates. Then again, I've also read that DSBR just "simply works", but who knows at this point. You'll have to contact AMD to get more information or more direct answers about the current state of the tech. It has been 3 years since the RX Vega 56\64 release, so I'm sure most people have moved on and stopped waiting for more info from AMD on this topic. It was a very nice feature that many people were looking forward to, and it left some users wondering about AMD's marketing and false promises. It was not a good look for AMD, but some concluded that it was mostly for apps outside of gaming and catered to the professional market.
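For readers unfamiliar with the general concept, here is a toy Python sketch of the "bin first, then render per tile" idea behind a binning rasterizer: primitives are sorted into screen-space tiles up front, and each tile's batch is then processed on its own. This is purely an illustration of the concept and says nothing about how AMD's DSBR hardware actually schedules work; the tile size and primitives are made up.

TILE = 32  # hypothetical tile size in pixels

# Each primitive is (name, x_min, y_min, x_max, y_max) in screen space.
primitives = [
    ("tri_a", 10, 10, 40, 30),
    ("tri_b", 100, 50, 140, 90),
    ("tri_c", 20, 20, 130, 80),
]

# Binning pass: record which tiles each primitive's bounding box touches.
bins = {}
for name, x0, y0, x1, y1 in primitives:
    for ty in range(y0 // TILE, y1 // TILE + 1):
        for tx in range(x0 // TILE, x1 // TILE + 1):
            bins.setdefault((tx, ty), []).append(name)

# "Render" one bin at a time: all work for a tile is handled together.
for tile, batch in sorted(bins.items()):
    print(tile, batch)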

There are tons of other architectural features, such as HBCC (High-Bandwidth Cache Controller), which works well with AMD workstation GPUs such as the Radeon Pro SSG, but those were aimed mostly at the enterprise markets and prosumers. HBCC essentially lets the GPU treat its local HBM2 as a cache over a much larger memory pool, paging data in from slower system memory or storage on demand instead of being hard-limited by on-board VRAM capacity. Obviously this type of technology would need to be supported on a per-program or per-game basis, with developers programming for the tech. You can think of it more as a "future" technology for gaming, since most games won't need more than 8GB today. Gaming consoles could certainly benefit from this type of technology, and I wouldn't be surprised if we see something like it in the PlayStation 5 or Xbox Series X. Obviously the professional market could make great use of HBCC. As for power-saving features, I will cover those in the Power Consumption section of this article.
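To illustrate the general idea (and only the general idea), here is a toy Python model of treating a small pool of local memory as a cache over a larger address space, with least-recently-used pages evicted when a new page has to be brought in. The page count, access pattern and LRU policy are invented for this sketch and are not how AMD's HBCC is actually implemented.

from collections import OrderedDict

HBM_PAGES = 4            # pretend local HBM only holds 4 pages
hbm = OrderedDict()      # resident pages, ordered from least to most recently used
hits = misses = 0

def touch(page):
    """Access a page, pulling it into 'HBM' from 'system memory' if not resident."""
    global hits, misses
    if page in hbm:
        hits += 1
        hbm.move_to_end(page)        # mark as most recently used
    else:
        misses += 1                  # would mean a transfer over PCIe
        if len(hbm) >= HBM_PAGES:
            hbm.popitem(last=False)  # evict the least recently used page
        hbm[page] = True

for page in [0, 1, 2, 0, 3, 4, 0, 1, 5, 0]:
    touch(page)

print("hits:", hits, "misses:", misses, "resident:", list(hbm))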

So basically, Vega is RTG's (Radeon Technology Group) drastic change in direction from the traditional GCN architecture that AMD had been improving from roughly 2012 to 2016. The server, AI, enterprise and professional environments are where most of these technologies will shine, but gaming could also benefit from this kind of tech if developers decide to invest in it. Vega marks an important step for AMD's GCN technology moving forward and allows AMD to expand rapidly and effectively with future updates to the architecture. Fortunately that future is already here: I'm writing this Vega 64 review in 2020, three years after its release, and the architecture that followed is AMD's new "RDNA". There are many more changes that AMD's RTG made with the Vega series, but this article is more about the Vega 64's gaming performance and efficiency. I just thought I'd give this GPU a decent overview of the changes from the previous Fiji GPU (R9 Fury X) I ran for years (which I reviewed and you can read by clicking here).