Vega 64 2020 + X58 Review
Kana's FineWine Edition


Update: 1-9-2021: 
RX Vega 64 2021 Article


Update: 7-21-2020: 
Price and Performance Comparison against RTX 2070\2060S - RX 5700 XT - GTX 1080\1080 Ti



Introduction


I finally got my hands on a decently priced RX Vega 64 LC to replace my R9 Fury X. I knew that another "FineWine" review article was coming the moment I started playing a few games. It took a while to finish everything, but I have benched the Radeon RX Vega 64 LC against 18 different games at multiple resolutions. This article includes roughly 64 individual benchmarks. Although it was time consuming, I believe it was worth it. My review is somewhat unique because I am still running my 12-year-old Intel X58 platform, which only offers a PCIe 2.0 x16 slot for the Vega 64 in 2020. This review should give people more insight into how well the two perform together. As usual, feel free to leave your comments at the end of the article.


Here Is A Little GPU Historia


Better late than never seems to be AMD's mantra, and apparently my mantra as well since I only recently retired my Radeon Fury X after 5 years. However, that statement doesn't tell you the entire story about AMD. AMD had roughly a decade of struggles while attempting to compete for market share across several IT sectors. Being the smallest of the "Big 3" (the other two being Intel & Nvidia), AMD doesn't have the same budget and cash flow as their main competitors, so AMD cannot always follow the same release schedule that their larger competitors follow. AMD has always brought us competitive GPUs for the price per performance across different tiers. Even with great price-per-performance solutions it was an uphill battle for AMD, since Nvidia claimed most of the market share and was the "go to" recommendation across the internet amongst enthusiasts. So when AMD released the Radeon R9\Fury series in 2015 they had a lot on the line. AMD wanted the high-end crown, and if they couldn't get it they would at least try. The media and most enthusiasts weren't so kind to AMD's high-end Fury X offering for several reasons. I personally believe the Fury X was a solid GPU, as I explained in my last article (click here to read my Fury X 2020 review) as well as in my initial Fury X review back in 2015. The year after the Fury Series, AMD released their final Fiji based GPU in 2016, the dual GPU Radeon Pro Duo. The Radeon Pro Duo was mostly aimed at the prosumer and retailed for $1,499.99 at release. The problem with that was the price: most people could just get two GTX 980 Ti's (SLI) and call it a day for gaming purposes. The Radeon Pro Duo and a GTX 980 Ti SLI setup more or less performed the same in games that supported Crossfire\SLI, but the Radeon Pro Duo wasn't aimed solely at gamers. The Radeon Pro Duo was more of a benefit for prosumers and professionals compared to the more expensive options from Nvidia. For software outside of gaming the Radeon Pro Duo was a great GPU for workstations.


AMD’s Mainstream Strategy


AMD had to take a step back due to cash flow issues after the Fury and the Radeon Pro Duo released. The Fury and the Pro Duo used a new memory interface called "HBM" (High Bandwidth Memory). Releasing a new GPU architecture along with a new memory interface can be very costly, and AMD was already having revenue and debt issues. Luckily AMD was still able to give us competitive prices with new technology during their rough years in the GPU market. Following the Radeon R9\Fury Series, AMD had a new strategy, and that strategy was to target the "mainstream" market (read more here). This market is where the majority of GPUs are sold. AMD wanted to target consumers in the $100 - $300 range (Polaris RX 400 series – 500 series). This obviously left Nvidia free to reign over the high-end tier for years, and Nvidia did just that with the GTX 980 Ti, the GTX Titan X\Xp combo and the GTX 1080. That was until a few new challengers came around Q3 2017 to challenge the GTX 1080. Those challengers were the Radeon RX Vega 56 and RX Vega 64.


Better Late Than Never – Two New Challengers Appear


The Radeon RX Vega 56\64 release in 2017 was AMD's return to the high-end & enthusiast market segments since the Fury X released in June 2015. Unfortunately, following the same cadence as the late Fury X vs the GTX 980, the RX Vega 56\64 release was very late to combat the GTX 1080. Before the Fury X released, Nvidia launched their GTX 980 "Ti" version 2 weeks earlier to take the wind out of AMD's sails. With the Radeon RX Vega 56\64 release, Nvidia released their GTX 1080 "Ti" version 5 months before AMD was able to get their Vega series to market. Now you see why I stated "better late than never" earlier. AMD's biggest issues were money and market timing, but those issues have changed since the company has overtaken Intel with the majority of recent sales in the enthusiast gaming PC market. AMD is also gaining more support from OEMs, prosumer workstation sales and multiple console architectures.

After AMD's Ryzen CPUs released, the company made a complete 180-degree turn and AMD is in a position to compete more than ever. Overtaking Intel was a huge victory, but now AMD needs to continue and shift their momentum towards the GPU area. Nvidia won't be the same opponent that Intel was, as Nvidia has had years to prepare and usually won't rest on the same tech the way Intel has done. Nvidia also has a great track record of bringing worthwhile performance upgrades year after year for a lot of users. AMD has more or less kept the mainstream & mid-range prices in check, but we are now faced with all-time high GPU prices, mostly due to a lack of competition; mining didn't help matters either. Gamers have shown that they have no problem spending $1,200 upwards of $2,000 for a consumer grade GPU. I personally don't like this trend, but this is what consumers have chosen to support.


A New King In My Machine


I have recently retired my Fury X after 5 long, awesome years and now my 12-year-old platform (Intel X58) has a new king in the machine. I have replaced my Liquid Cooled R9 Fury X with a Liquid Cooled Radeon RX Vega 64. Since I am still on PCIe 2.0 and a 2008 Intel X58 platform in 2020, it's going to be interesting to see how well my build performs with this much newer Vega 64. So you can say that this review is somewhat "unique" in its own way. I have a thing for legacy tech, and we will see how well the old X58 and the 3-year-old Vega 64 perform in 2020. If you haven't read my Fury X - 5 year review in 2020, you should check that out as well for a quick comparison.




What is AMD's “Vega” Architecture?


“Vega” is AMD's GPU architecture that expands on their previous GCN architecture. Unlike previous implementations, Vega makes some drastic changes to GCN. Vega marks AMD's 5th generation of the GCN architecture. AMD relabeled their CUs (Compute Units) to NCUs (Next-Generation Compute Units). Similar to Fiji's 64 CUs, AMD's Vega has 64 CUs as well; however, the performance has been greatly improved at an architectural level. By the way, the brand names "Vega 64" and "Vega 56" are based on the number of compute units available to each GPU. Fiji based GPUs (Radeon R9\Fury Series) topped out at roughly 8.6 TFLOPS of shader performance, while the Vega architecture reaches 13.7 TFLOPS. This figure varies depending on the Vega model used; for instance the Liquid Cooled "Vega 64" hits 13.7 TFLOPS, the air cooled "Vega 64" reaches about 12.66 TFLOPS, and the "Vega 56" sits around 10.5 TFLOPS. This is due to differences in clock speeds, enabled CU counts, heat concerns and power constraints. AMD RTG (Radeon Technologies Group) focused much more on power efficiency along with low latency with Vega. This is very important in the enterprise area.

Vega is built on 14nm FinFET LPP for lower power usage. Unlike Fiji (28nm), Vega takes advantage of "Rapid Packed Math", which allows a theoretical throughput of 27.4 TFLOPS. Rapid Packed Math allows Vega to handle packed FP16 operations, which doubles the FP32 throughput (13.7 x 2 = 27.4). It isn't limited to just 16-bit values; it can also take advantage of 8-bit and of course 32-bit operations. AMD implements something named "IWD" (Intelligent Workload Distributor), which aids in preventing the pipeline from stalling due to things such as context switching or smaller draw calls that might not completely fill the pipe. Speaking of the pipeline, AMD engineers went the extra mile to ensure that Vega became their highest clocked Radeon GPU (1,677MHz) to date when it released in 2017 (Fiji was only 1,050MHz and had limited overclocking potential). So this means increased pipelining, but it could also lead to detrimental results if pipeline bubbles or stalls appear during computation. AMD's RTG engineers were able to keep the ALU at only four stages while not impacting performance from all of the other "new" and dramatic changes to the Vega\GCN 5th generation architecture.
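If you're curious where those TFLOPS figures come from, here is a rough back-of-the-envelope sketch. It assumes the published Vega 64 LC figures of 4096 stream processors (64 NCUs with 64 ALUs each) and the 1677MHz boost clock; treat it as an illustration rather than anything official from AMD.

```python
# Back-of-the-envelope math behind the quoted TFLOPS figures for the Vega 64 LC.
# Assumes 4096 stream processors (64 NCUs x 64 ALUs) and the 1677MHz boost clock.

stream_processors = 64 * 64        # 64 NCUs with 64 ALUs each = 4096
boost_clock_ghz = 1.677            # Liquid Cooled boost clock
flops_per_alu = 2                  # a fused multiply-add counts as 2 FLOPs per clock

fp32_tflops = stream_processors * flops_per_alu * boost_clock_ghz / 1000
fp16_tflops = fp32_tflops * 2      # Rapid Packed Math: two FP16 ops per FP32 lane

print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")   # ~13.7 TFLOPS
print(f"FP16: ~{fp16_tflops:.1f} TFLOPS")   # ~27.5 TFLOPS (AMD quotes 27.4)
```

The lower-clocked air-cooled Vega 64 and the cut-down Vega 56 follow the same math with their own clocks and CU counts, which is why their quoted figures land lower.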


Display and Image Performance


As far as displays go, Vega supports up to 8K @ 60Hz using 32-bit color and everything in-between. Using 64-bit HDR allows much higher refresh rates such as 4K @ 120Hz, and it can also go up to 8K @ 60Hz as well. DisplayPort 1.4 and HDMI 2.0 are supported. Vega supports several different display configurations, but the biggest are x6 simultaneous 4K\60Hz outputs, x2 4K\120Hz, x3 4K\60Hz (64-bit HDR), and x3 5K\60Hz. Another technology known as the Draw-Stream Binning Rasterizer (DSBR) was another selling point. DSBR is a "tiled" type of rendering, meaning that it divides the image into tiles and adds them to batches; afterwards the GPU processes the data\batches one bin at a time. Apparently this takes only one clock in the pipeline per bin. DSBR has some controversy surrounding it in the gaming scene, and apparently it works on a "per game" basis and requires developers to support it in addition to AMD driver updates. Then again, I've read that DSBR just "simply works", but who knows at this point. You'll have to contact AMD to get more information or more direct answers surrounding the current state of the tech. It has been 3 years since the RX Vega 56\64 release, so I'm sure most people have moved on and stopped waiting for more info from AMD on this topic. It was a very nice feature that many people were looking forward to, and it left some users wondering about AMD's marketing and false promises. It was not a good look for AMD, but some concluded that it was mostly for apps outside of gaming and catered towards the professional market.

There are tons of other architectural features, such as the HBCC (High-Bandwidth Cache Controller), which works well with AMD workstation GPUs such as the Radeon Pro SSG, but those were aimed mostly at the enterprise markets and prosumers. HBCC basically allows the GPU to access memory more effectively instead of relying on much slower dynamic system memory. Obviously this type of technology would need to be implemented on a per program or per video game basis, with the developers programming for the tech. You can think of it more as a "future" technology for gaming since most games won't need more than 8GB today. Gaming consoles could certainly benefit from this type of technology, and I wouldn't be surprised if we see this type of tech in the PlayStation 5 or Xbox Series X. Obviously the professional market could make great use of HBCC. As far as power saving features go, I will explain more of those in the Power Consumption section of this article.

So basically Vega is AMD RTG's (Radeon Technologies Group) drastic change in direction from the traditional GCN architecture that they had been improving from approximately 2012 to 2016. The server, AI, enterprise and professional environments are where most of these technologies will shine, but gaming could also benefit from this type of tech if developers decide to invest in it. This architecture marks an important step in AMD's GCN technology and laid the groundwork for AMD to expand rapidly and effectively with future updates. Fittingly, the future is here, since I'm writing this Vega 64 review in 2020, literally 3 years after its release. The future is the present, and that is AMD's new "RDNA" architecture. There are many more decisions that AMD's RTG changed with the Vega series, but this article is more about Vega 64 gaming performance and efficiency. I just thought I'd give this GPU a decent overview of the changes from the previous Fiji GPU (R9 Fury X) I ran for years (which I reviewed and you can read by clicking here).




























Specifications


Here is a slide that gives all of the relevant information directly from AMD.


Gaming Rig Specs:

CPU: Xeon X5660 @ 4GHz
Motherboard: ASUS Sabertooth X58
RAM: 24GB RDIMM DDR3-1600MHz [6x4GB] - ECC Buffered
SSD NVMe: 3TB - 2.7GB\s Read - 2.1GB\s Write
SSD NVMe: 256GB - 1.4GB\s Read - 600MB\s Write
SSD(x2): 256GB - 550MB\s Read - 500MB\s Write - RAID 0
HDD(x2): 2TB - 330MB\s Read - 320MB\s Write - RAID 0
HDD(x2): 2TB - 330MB\s Read - 320MB\s Write - RAID 0
PSU: EVGA SuperNOVA G2 1300W 80+ GOLD

GPU: AMD Radeon RX Vega 64 Liquid Cooled - Push
GPU Speed: (Stock) – Core: 1406MHz (1677MHz Boost)
GPU Drivers: Radeon "Adrenalin" 20.4.2 [May 15th, 2020]


So as you can see, I am running a 12-year-old platform from Intel in 2020. I've nearly tapped all of the power out of this beast, but there is still some gas left in the tank. For gaming purposes it will be interesting to see how well my platform (X58\2008) handles a high-end GPU (Vega 64\2017) that released literally 9 years later. The Fury X ran like a champ, so I was expecting great results from its successor. The performance numbers always look great from AMD on paper, but the actual performance is what sells GPUs and gets people excited. This article will shed some light on both the Intel X58 and the Vega 64 performance in 2020, so perhaps it can show that the X58 is still a capable gaming rig. If anyone has shown how well legacy X58 tech runs, it definitely has been me. Just to think, many people told me that I could not future proof my gaming rig way back in the late 2000s....yeah right. I started this X58 craze on a large scale in 2013, 5 years after the X58 released, and I will more than likely be finishing it since I'm still running it as my daily rig. The main focus here, though, will be the AMD Radeon RX Vega 64 Liquid Cooled GPU.

I would also like to point out that I am using a slightly overclocked X5660 @ 4GHz to give a fair representation of Intel X58 performance. Most X58 users can hit 4GHz easily. I'm also using a normal DRAM frequency. I feel that this is one of the easiest setups to run and one that most users can use as a reference.


Real Time Benchmarks™

Real Time Benchmarks™ is something I came up with to differentiate my actual in-game benchmarks from the built-in standalone benchmark tools. Sometimes internal in-game benchmark tools don't provide enough information. I gather data and use 4 different methods to ensure the frame rates are correct for comparison. This way of benchmarking takes a while, but it is worth it in the end. This is the least I can do for the gaming community and for users who are wondering if the Radeon RX Vega 64 & the X58 can still play newly released titles in 2020 effectively. I have been performing Real Time Benchmarks™ for about 7 years now and I plan to continue providing additional data instead of depending solely on internal benchmark tools.


What is FPS Min Caliber™ ?

You will notice something named "FPS Min Caliber". Basically, FPS Min Caliber is something I came up with to differentiate from the absolute minimum FPS, which could simply be a point during gameplay when data is loading, saving, uploading, DRM checking, etc. The FPS Min Caliber™ is basically my way of letting you know the lowest FPS average you'll see during gameplay. The minimum FPS [FPS min] can be very misleading at times since it may not always be noticeable. FPS min is what you'll encounter only 0.1% of your playtime, and most times you won't even notice it. Obviously the average FPS and average frame time are what you'll encounter 99% of your playtime.


What is FPS Max Caliber™ ?

FPS Max Caliber uses the same type of thinking when explaining the max FPS. Instead of focusing on the highest frame rate that you'll only see 0.1% of the time, I've included the FPS Max Caliber you can expect to see during actual gameplay.

With that being said, I will still include both the Minimum FPS and the Max FPS. Pay attention to the charts since some will list 0.1% (usually for synthetic benchmarks), but normally I use the 1% FPS Min for nearly all of my benchmarks. In the past I used the 97th percentile results, but now I just use 1% most of the time. I just thought I would let you enthusiasts know what to expect while reading my benchmark numbers.
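For anyone wondering how a "1% low" differs from the absolute minimum FPS in practice, here is a rough sketch of how such a figure is commonly derived from a frame-time log. This is just an illustration with made-up numbers, not my exact capture pipeline (as mentioned above, I cross-check with 4 different methods).

```python
# Rough illustration of how a "1% low" FPS figure is commonly derived from a
# frame-time log (milliseconds per frame). Not my exact pipeline, just the idea.

def one_percent_low_fps(frame_times_ms):
    # Sort frames from slowest to fastest and keep the worst 1% of them
    worst_first = sorted(frame_times_ms, reverse=True)
    count = max(1, len(worst_first) // 100)
    slowest_one_percent = worst_first[:count]
    # Average those frame times, then convert back to FPS
    avg_ms = sum(slowest_one_percent) / len(slowest_one_percent)
    return 1000.0 / avg_ms

# Hypothetical log: mostly ~16.7ms frames (60 FPS) with a few heavy spikes
log = [16.7] * 990 + [25.0] * 8 + [50.0, 80.0]
print(f"Average FPS: {1000.0 * len(log) / sum(log):.1f}")   # ~59 FPS
print(f"1% low FPS:  {one_percent_low_fps(log):.1f}")       # ~30 FPS
```

Notice how a single 80ms hitch would dominate an "absolute minimum" reading (12.5 FPS) even though the run averaged nearly 60 FPS, which is exactly why I prefer the 1% figure and my Caliber numbers.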



























Synthetic Benchmarks
Unigine - Heaven, Valley & SuperPosition



Unigine's Heaven, Valley and most recently Superposition are very well known benchmarking and stability tools. Even though Unigine's Heaven initially released in 2009, followed by Valley in 2013 and Superposition in 2017, all three tools give gamers valuable information across several generations of different GPU architectures. The benchmarking tools are also gorgeous, and Heaven still looks great by today's standards, so don't be fooled by its age; it can still easily bring the greatest GPUs released over the past 10 years to their knees.


Superposition Benchmark v1.1

1920x1080p - High Score: 10,685
3840x2160p - 4K Optimized Score: 6,577



Unigine Heaven Benchmark 4.0

2560x1440p - Ultra Score: 2455
3840x2160p - Ultra Score: 1041


Valley Benchmark v1.0

2560x1440p - Ultra Score: 4108
3840x2160p - Ultra Score: 1998





Real Time Benchmarks™


Superposition Benchmark v1.1 - In-depth Analysis


Similar to previous Unigine benchmark tools, Superposition includes a "game" mode where you can load up the scene at any resolution and control the camera. Not only can you control the camera, but you can control different elements in the scene as well, such as the time of day slider. Superposition also has interactive objects in this mode, such as a lever that changes the time of day or chalk you can pick up to write on a chalkboard.

Going a little deeper, I decided to analyze the game mode where I could actually control different elements in the scene. I loaded up the game mode and walked around the scene while pulling several levers for fun. Here are my results after two and a half minutes of gameplay using the built-in 4K Optimized preset.































Synthetic Benchmarks
3DMark - Time Spy & FireStrike



Fire Strike v1.1 - Normal

Normal Overall Score: 17,920
Normal Graphics Test 1: 121.22 FPS
Normal Graphics Test 2: 96.01 FPS
Normal Physics Test: 43.33 FPS
Normal Combined Graphics: 29.25 FPS


Fire Strike v1.1 - Extreme

Extreme Overall Score: 10,217
Extreme Graphics Test 1: 60.50 FPS
Extreme Graphics Test 2: 42.54 FPS
Extreme Physics Test: 49.47 FPS
Extreme Combined Graphics: 21.52 FPS

Fire Strike v1.1 - Ultra

Ultra Overall Score: 5,667
Ultra Graphics Test 1: 31.32 FPS
Ultra Graphics Test 2: 21.31 FPS
Ultra Physics Test FPS: 43.32 FPS
Ultra Combined Graphics: 12.61 FPS




Time Spy v1.2

Normal Overall Score: 7,126
Normal Graphics Test 1: 53.84 FPS
Normal Graphics Test 2: 42.51 FPS
Normal CPU Score FPS: 16.17 FPS




























Synthetic Benchmarks
Crytek's Neon Noir - Ray-Tracing




Crytek released their ray tracing benchmark in late 2019. Ray tracing has been the talk amongst gamers over the past few years and has been gaining widespread attention. Neon Noir works on AMD and Nvidia GPUs by using Crytek's Total Illumination tech within their CryEngine software. Crytek produces some of the most graphically stunning games and Neon Noir is no exception, but the main focus here is ray tracing. Let's see how well the Radeon RX Vega 64 handles ray tracing in 2020.






Neon Noir - In-depth Analysis


Obviously the RX Vega 64 LC does much better than my old Fury X. The performance at 4K is much smoother and puts up decent numbers for a 3-year-old card running on my old X58 platform.





























Wolfenstein II: The New Colossus



We have seen the synthetic benchmarks and now we can focus on actual gameplay benchmarks. I wanted to start this segment of the article off with a bang. Wolfenstein II doesn't have an internal benchmark tool, so I benched the first level of the game to give a representation of what the RX Vega 64 and the X58 can do in 2020. I was so impressed with the results that I re-benched the game several times the following day. Even after all of that I decided to load up the game and capture 4K gameplay so that everyone can see how well the RX Vega 64 performs. Please watch the YouTube video below and, as most YouTube creators say, don't forget to hit that like button if you enjoy the content. With that being said, don't forget to share this article as well if you enjoy the content. I would appreciate it.



Real Time Benchmarks™





Wolfenstein II: The New Colossus 4K Gameplay Footage




























Resident Evil 2




The original (1998) Resident Evil 2 is one of my favorite video game releases of all time, and the remake has quickly become one of my favorite games to date. This remake of the original RE2 is just as great as the GameCube remake of Resident Evil 1 (REmake). This game is also optimized very well, and Capcom has provided us with a seamless experience. There are no loading screens for a large portion of the game once you start playing. Even when it comes to the cutscenes and events, the game continues to flow smoothly. Beyond the awesome gameplay, this title sports a lot of graphical features that are easy to comprehend and set.

My Fury X couldn't handle two graphical settings at their max values @ 4K (the "Shadows" and "Mesh" settings), so I was forced to use the very next setting, "High". Using the "High" setting was fine since it was only one step down and the image quality was still superb. The other limitation I faced was the 4GB vRAM on the Fury X. Although I could run the game with max textures (8GB), this had a bad effect at higher resolutions such as 4K. At lower resolutions such as 1080p and 1440p, using max textures wasn't that bad. "SSAO" also ran better on my Fury X.

All of that changes with the RX Vega 64. I decided to use the "Max" preset, which obviously maxes all of the graphics settings, but I used HDAO this time around instead of SSAO. I now have an 8GB vRAM buffer and plenty of horsepower to drive this game. That last statement doesn't mean that the Fury X (Fiji) was "bad", but hitting that 4GB vRAM buffer was rather easy, which required lowering several settings in many games. The much newer Vega architecture has updated tech that handles certain workloads better when it comes to lighting and textures. Therefore I maxed all of the settings and benchmarked. This game uses up to 12.63GB of vRAM, or at least that's what the game "claims" to use.


Resident Evil 2 - vRAM Requirements @ 4K









Resident Evil 3






RE3 is more up to date since it released a few months ago in April 2020. Following the same settings as the RE2 benchmarks above, I used the same "Max" preset with HDAO in RE3. Overall the game plays just as well as RE2.



























Middle-earth: Shadow of War





Real Time Benchmarks™






Hitman 2







Real Time Benchmarks™




IO Interactive was one of the few developers to actually take a AAA title and give it access to "modern" technology. Back in 2016, IO Interactive developed async compute support within Hitman and took advantage of AMD's GPU hardware features. In my original Hitman (2016) benchmarks with my Fury X, I saw an increase of up to 34% @ 4K when DX12\async compute features were used correctly. This increase didn't come from overclocking, but directly from driver and developer updates. IO Interactive tried their hardest to implement async compute effectively and they did a great job. You can read about my Hitman (2016) performance increases here (click here to read my 2016 Fury X Performance Increases).

Hitman 2 performs very well and is using DX12 with async compute. The Vega 64 runs very smoothly and is never starved for data. I wish more developers would take advantage of features like async compute, but I guess the majority of console titles will be using the modern tech more effectively than PC titles at this point.


























Shadow of the Tomb Raider




Shadow of the Tomb Raider, developed by Eidos Montréal, marks the final chapter of Lara's story arc that began with the 2013 reboot. Shadow of the Tomb Raider also takes advantage of AMD's modern technology (async compute features). The last Tomb Raider title had major DX12 async compute issues and frame pacing issues shortly after release (click here to read about the issues I experienced). Eidos eventually got all of their problems resolved and DX12\async compute performed very well (I also show the improved results in the previously linked article). This time around everything appears to be fine in Shadow of the Tomb Raider and DX12 has not been giving me any issues.


Real Time Benchmarks™







Shadow of the Tomb Raider - Internal Benchmark Test





As you can see above, the Vega 64 performs very well on my old X58 platform and the CPU @ 4GHz has no problems pumping out information. The CPU is literally screaming for more. The gameplay is very smooth, like every other game I've benchmarked in this article.



























Red Dead Redemption 2



Red Dead Redemption 2 was one of the most anticipated games of 2018. Sadly, the PC version came more than 12 months later in Q4 2019. That's still a win for us PC gamers, and we will gladly take a late release over "no" release. RDR2 has a plethora of graphical settings to adjust, and that is not an understatement.

Most Rockstar games have plenty of graphical settings to adjust, which is a great thing since there are so many different generations of graphics cards on the market. However, sometimes too many options can become cumbersome, especially if you don't have a reference where you can actually see the settings that you are changing in real time. Games such as Resident Evil 2 and Shadow of the Tomb Raider give you an on-screen image reference where you can easily see what effect graphical settings have on the image quality. You don't get that feature in Red Dead Redemption 2, and I think it would help greatly when tweaking pages, and I mean pages, of graphical settings from only a black menu screen.

Luckily the game has a "preset" slider that can adjust all of the settings for you. The developers set the graphics settings automatically when using the "Quality Preset Level", therefore I decided to use the highest preset possible, which is the "Favor Quality" preset. I also used the Vulkan API naturally, with TAA and FXAA, but I disabled MSAA since MSAA usually isn't needed at high resolutions in the types of games I play.


Real Time Benchmarks™






Crysis 3



Can it run Crysis? Well yes it can. I decided to load up the first level and do a run through while causing as much chaos as possible. There were plenty of explosions, gunfights and dead enemies. The settings I used are listed below.


Real Time Benchmarks™


Level - Post Human





Optimizing Crysis 3 Performance


Typically I would just run Crysis 3 maxed and call it a day, but if you ever wanted to gain a nice performance boost in Crysis 3 without degrading the image quality, you can lower two settings. Those two graphical settings are "Post Processing" and "Shading". Once I lowered those two settings from "Very High" to "High" I received a nice boost in Average FPS and Min FPS across the board @ 4K. The game still looked great, and here are the results.





The Average FPS increased by 36% and the Minimum FPS increased by 33%, which is pretty nice considering I lowered two settings from "Very High" to simply "High" with no visual degradation.



























Deus Ex: Mankind Divided



Real Time Benchmarks™






The Witcher 3



Real Time Benchmarks™





























Apex Legends


Real Time Benchmarks™








The Legend of Zelda: Breath of the Wild @ 4K (Cemu - v1.19.3)




This game was a request in my last Fury X review, and I have decided to include it in the RX Vega 64 review as well. I have also included YouTube 4K footage below. Apparently this was one of the hardest games to run @ 4K at some point in time, and I'm always up for a challenge regardless of my legacy and limited technology. Let's take a look and see how well the game performed during my playthrough.


Real Time Benchmarks™



The Legend of Zelda: Breath of the Wild 4K Gameplay Footage



























RX Vega 64 Power Consumption


So far the Radeon RX Vega 64 LC has been a great upgrade to replace my R9 Fury X LC, but there are a few concerns. However, those concerns can easily be solved, as I will explain below. Power consumption has always been one of AMD's Achilles' heels on both the CPU and GPU side. AMD has mostly resolved their CPU power consumption thanks to smaller CPU fabrication nodes and outside help. The GPU power consumption is much better than it used to be, but AMD is known to push the wattage above what is truly needed. So undervolting became a big thing for AMD GPUs over the years since it allowed users to save money on their electricity bill and worry less about heat output. Undervolting naturally led to underclocking for stability reasons, but most people want the "full" performance of the GPU boost clocks, so normally people would undervolt to a certain point while leaving the core clock alone. With the Vega architecture, AMD has focused on providing many built-in and manually accessible power saving features.

Radeon Tuning Controls and Presets

AMD's Radeon Software has three "Tuning Control" settings for the RX Vega 64: Automatic, Preset and Manual. Within the "Automatic" and "Preset" settings, AMD provides "Auto Tuning" and "Tuning Preset" options respectively. The "Manual" Tuning Control obviously has no preset menu since you will be tuning the GPU, HBM, fan and power limits manually.

For lower wattage usage, the "Preset" control with the "Power Saving" tuning preset would be the best setup for daily use and most gaming sessions, as we will see in my charts throughout this part of the article.




As you can see above, the RX Vega 64 typically only pulls 3 watts during daily usage (from the GPU chip only). In all of my benchmarks in this article I used the "Automatic" & "Default" settings to give a normal representation of the performance. This mode runs the GPU in the standard mode that you'd expect out of the box, and of course you won't get any massive power savings. However, this is where my problems began with this awesome GPU. While benchmarking Shadow of the Tomb Raider (Internal Benchmark Tool), I watched my entire gaming rig (overclocked CPU\RAM\MB, etc.) literally hit a "peak" of 673watts. That is simply ridiculous, but remember that I was running a decently overclocked CPU\RAM\MB and my rig isn't the most power efficient, being 12 years old (Intel X58). During my Hitman 2 benchmark I hit a "peak" of 611watts. Once again, that's crazy high. Now remember that these are "peaks" from my entire gaming rig, so you won't always be pulling that many watts on average, but it's still worth noting. I would also like to add that these "peak" power results occur while running unrealistically stressful in-game benchmark tools and don't represent actual gameplay scenarios. You won't always need to run your card at full speed in every area or scene in the majority of games. The RX Vega 64 LC reports pulling 265watts (GPU chip) using stock settings with no power saving features enabled.

From this point I knew that I couldn't run the Vega 64 using the stock settings and started exploring the power saving features that AMD touted during their Vega marketing rollout. Unlike my previous R9 Fury X 2020 review, where I used 3DMark Fire Strike, this time around I have decided to use a more up to date benchmarking\stress tool from Unigine. So let's take a look and see how well AMD's power saving features work in the Superposition Benchmark (v1.1).



Focusing for a moment on the stock X58 + X5660 + Vega 64 LC, we see that Unigine Superposition v1.1 easily pulls roughly 556-560watts without the power savings. Simply enabling the power saving features as shown above in the screenshot ("Preset" > "Power Saving"), we can see a decrease of approximately 84watts. The peak can still reach 561watts, and that behavior was expected since the power saving features must balance performance and power savings effectively. What's more important is the average wattage that you will be pulling from your outlet. So everything is already looking good without touching any manual settings. Now let's take a look at an overclocked X58 + X5660 @ 4GHz.



Now we turn our focus to an overclocked X58 + X5660 @ 4GHz while using the RX Vega 64 Power Saving feature. I ran a series of tests at 4K and 1440p using the "Extreme" preset to give a nice balance of GPU and CPU workloads. Using the 4K (3840x2160p) Extreme preset we can see that my gaming rig is pulling roughly 598 watts with a peak of 604 watts. Using the power saving mode we see a decrease of around 108watts, which lowers the average down to 490watts, a very good start. The RX Vega 64 reported that it was using 265watts @ 4K, but when the power savings were enabled the RX Vega reported only 197watts being used. So with power savings we can see the RX Vega 64 save around 68watts.

Now, the 2560x1440p Extreme preset pretty much shows the same results. We have a drop of around 98watts @ 1440p. AMD also has a feature that they call "Radeon Chill", which is basically an FPS limiter. I limited the FPS to 30fps and we can see that my gaming rig pulls a much lower amount of wattage from the outlet.

I would like to state that for power saving purposes, using the "Radeon Chill" FPS limiter feature by itself is pointless without the Power Saving preset, from my experience; unless you are using the FPS limiter for other reasons such as screen tearing. Basically, if you only use the FPS limiter you will limit your FPS as expected, but you won't get any power saving benefits. So using them both in tandem would be ideal. At the very least you'll want the power saving features enabled for most gaming sessions.

Now I would like to take a look at the Power Saving features and the "Radeon Chill" FPS Limiter in an actual gaming session. I will be using Resident Evil 2 for this test.




Looking at the numbers you can clearly see a benefit across the board for the Power Saving features. Using the max wattage as a reference you can see that by simply enabling the Power Saving Preset I was able to slash my wattage by 168watts during actual gameplay. The RX Vega 64 went from pulling 265watts down to 200watts on the GPU chip so that's great even without the FPS Limit. Using the Radeon Chill feature along with the FPS Limiter set to 65FPS I was able to drop my power consumption by another 67watts to only 360watts max. Then by limiting my FPS to only 60FPS I dropped down another 15watts with the peak wattage of 345watts.

Let's not forget that the power saving features are not only great for the GPU wattage usage, but great for the GPU heat output as well. While playing Resident Evil 2 with a 60FPS limit, the RX Vega 64 only used an average of 76watts on the GPU. The temperatures averaged 35C as well. RE2 is an easy game to run since it is optimized very well, and obviously the Vega doesn't have to sweat at all with a low FPS cap.

The same can be said when playing Wolfenstein II: The New Colossus. I used the Power Saving preset along with Radeon Chill to limit my FPS to 100FPS. My temps averaged 35C and my GPU wattage usage averaged only 73watts. When gaming @ 4K I limited my FPS to only 65fps and the RX Vega 64 used only 195watts, which is down from 265watts.

So the RX Vega 64 power consumption started off with some really wild results, but as we can see there are several simple ways you can instantly save a lot of wattage and lower the RX Vega 64 heat output. If you can live with limiting your FPS to 60fps – 100fps, that can help you greatly during long gaming sessions and will definitely save you money on your electric bill. You can take it a step further and manually set up several undervolting and underclocking settings within the AMD Radeon Software tuning settings. AMD has gone the extra step and allowed us more control over the HBM performance as well. I'll need more time with the card to tweak settings manually and understand what this card requires more clearly. Overall I am impressed with the power consumption controls that we have access to.
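To put those savings into perspective, here is a quick back-of-the-envelope estimate of what shaving roughly 168watts (the stock vs. Power Saving difference I measured in RE2 above) could mean on an electric bill. The daily gaming hours and the electricity rate are hypothetical assumptions, so plug in your own numbers.

```python
# Back-of-the-envelope estimate of the yearly savings from the Power Saving
# preset + Radeon Chill. The hours and electricity rate are assumptions.

watts_saved = 168          # stock vs. Power Saving preset in my RE2 test
hours_per_day = 3          # assumed daily gaming time
price_per_kwh = 0.13       # assumed electricity rate in USD per kWh

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
dollars_per_year = kwh_per_year * price_per_kwh
print(f"~{kwh_per_year:.0f} kWh per year, roughly ${dollars_per_year:.0f} saved")
```

At those assumed numbers you are looking at roughly $20 - $25 per year, plus the less tangible benefits of a cooler and quieter card.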


Here is a screenshot of the manual settings that you can configure.



























RX Vega 64 & R9 Fury X Comparison


Now that I've finally shown all of the RX Vega 64 benchmarks, I would like to compare it to the R9 Fury X. I've included several games in the chart below to show my performance increases. I am able to run the Vega 64 with higher graphical settings due to its larger 8GB vRAM buffer. The Fury X is a solid 1440p GPU, but at higher resolutions in most games the 4GB buffer is easy to fill up. The RX Vega 64 doubles that 4GB to 8GB and has plenty left in the tank. If developers take advantage of things such as asynchronous compute, the Vega 64 and the Fury X can both perform even better.

In RE2 @ 1080p we can see the Average FPS increase by 43%. The Minimum FPS increases by a whopping 123% as well. If we move to RE2 @ 4K we can see that not only can I run the game even smoother, I also gain a 140% increase in the Average FPS category. Once again my Minimum FPS increases by a whopping 387.5% @ 4K.



In Wolfenstein II @ 4K the Vega 64 flat out lays the Fury X on its back with a knockout punch – no contest! My Wolfenstein II @ 4K Average FPS went up 105% and the Minimum FPS shot up 174%. As previously stated, no contest when it comes to 4K gaming. That's not to say that the Fury X isn't a solid GPU at 1440p and 4K, but the Vega 64 is a worthwhile upgrade for anyone running the Fury X if 4K gaming with higher refresh rates is a priority.

Shadow of the Tomb Raider reaches an incredible 178% increase in the Average FPS, with the Minimum FPS increasing by 192%. On that note, the FPS Max increased by 192% as well.

You can compare other games by checking out my Fury X 2020 review (click here to view benchmarks). The settings are slightly different for some games, but there are a few titles that share the same graphical settings. This is a great upgrade for anyone running a Fury X. If you can find a Vega 64 for a great price it might be worthwhile. AMD's next Navi release (RDNA 2) is right around the corner and expected this year, but who knows if it will arrive on time. AMD has a track record of delays and COVID-19 isn't going to make anything better.



























RX Vega 64 Compared Against
RX 5700 XT vs RTX 2060S & 2070S vs GTX 1080 & 1080Ti


I have received several requests from readers to compare the RX Vega 64 against several similarly priced GPUs. I have gathered samples from several sites to compare several GPUs from AMD's and Nvidia's latest mid-range and high-end segments. It's worth noting that all of my benchmarks were run on a 12-year-old Intel X58 with PCIe 2.0 tech from 2007, so the Vega 64 results might be slightly better on newer Intel and AMD platforms. Shadow of the Tomb Raider had the most samples out of all of the games I was able to find. I've chosen one Nvidia GameWorks title (The Witcher 3) and one AMD sponsored title (Wolfenstein II), with three mostly neutral titles (RDR2, Hitman 2, SoTR) that perform well on both Nvidia and AMD GPUs. In the charts below every GPU is listed relative to my RX Vega 64 at 0%: negative numbers mean that card was faster than the Vega 64 in my samples, while positive numbers mean it was slower.
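As a quick illustration of how to read those deltas (this is one common way to express them; the FPS values below are made up purely for illustration and are not actual results):

```python
# Illustration of how the relative numbers in the charts below are read.
# Everything is expressed against the RX Vega 64 at 0%. The FPS values here
# are made up purely for illustration and are not actual results.

def relative_to_vega(vega_fps, other_fps):
    # Positive result => that card was slower than the Vega 64 in my samples
    return (vega_fps / other_fps - 1) * 100

samples = {"RX Vega 64": 50.0, "Example GPU A": 55.0, "Example GPU B": 42.0}
for name, fps in samples.items():
    print(f"{relative_to_vega(samples['RX Vega 64'], fps):+.0f}% {name}")
```

With that out of the way, let's take a look at the results.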


Hitman 2

3840x2160 (4K)

0% RX Vega 64
+7% RTX 2070 Super
+7% GTX 1080 Ti
+26% RX 5700 XT
+30% RTX 2060 Super
+37% GTX 1080


Let's start this comparison off on a good note for the RX Vega 64. We can see that the RX Vega 64 has a pretty good lead @ 4K over the other GPUs in Hitman 2. The RTX 2070 Super and GTX 1080 Ti are only approximately 7% slower. The RTX 2060 Super and the GTX 1080 are far behind, as expected. Seeing the GTX 1080 at the bottom will become a regular occurrence in all of my charts.



Wolfenstein II: The New Colossus

Wolfenstein II - 2560x1440p

-10% RTX 2070 Super
-3% GTX 1080 Ti
0% RX Vega 64
0% RX 5700 XT
+7% RTX 2060 Super
+29% GTX 1080

Wolfenstein II - 3840x2160 (4K)

0% RX Vega 64
0% RTX 2070 Super
+5% GTX 1080 Ti
+11% RX 5700 XT
+18% RTX 2060 Super
+40% GTX 1080


Wolfenstein II is an AMD sponsored title, but all of the GPUs work well with this game for the most part. The Vega 64 comes in 3rd place at 1440p and trails the GTX 1080 Ti by 5fps. The RTX 2070 Super takes the top spot. The RX 5700 XT matches the Vega 64, with the RTX 2060 Super being 5fps behind the Radeon RX cards. The GTX 1080 isn't close and takes the bottom spot again.

At 4K the RX Vega 64 takes the top spot, but it basically ties with the RTX 2070 Super. The GTX 1080 Ti falls behind by only 4fps. The 5700 XT performs well and is slightly better than the RTX 2060 Super by only 5fps. The GTX 1080 holds its own, but it's still at the bottom, losing to the RX Vega 64 by 40%.



Red Dead Redemption 2

Red Dead Redemption 2 - 2560x1440p

-9% RX 5700 XT
-7% RTX 2070 Super
0% RX Vega 64
+2% GTX 1080 Ti
+8% RTX 2060 Super
+31% GTX 1080

Red Dead Redemption 2 - 3840x2160 (4K)

-5.2% RX 5700 XT 
0% RX Vega 64
+5% RTX 2070 Super
+9% GTX 1080 Ti
+15% RTX 2060 Super
+41% GTX 1080

Moving on to Red Dead Redemption 2, we see that the RX 5700 XT takes the lead at 1440p and 4K. At 1440p the RTX 2070 Super is right there with the 5700 XT and it's nearly a tie. The Vega 64 and the GTX 1080 Ti only trail by 4 to 5 fps.

Looking at the 4K results we see both the RX 5700 XT and the RX Vega 64 take the top spots. The RTX 2060 Super does well at both resolutions. The GTX 1080 trails the Vega 64 by 31% at 1440p and 41% at 4K, making its value plummet even further, which I will discuss below in the "Price" section.


Shadow of the Tomb Raider

Shadow of the Tomb Raider - 2560x1440p

-14% RX 5700 XT
-11% RTX 2070 Super
-7% GTX 1080 Ti
0% RX Vega 64
+4% RTX 2060 Super
+19% GTX 1080

Shadow of the Tomb Raider - 3840x2160p (4K)

-15% RTX 2070 Super
-13% RX 5700 XT
-10% GTX 1080 Ti
0% RX Vega 64
+3% RTX 2060 Super
+22% GTX 1080


In Shadow of the Tomb Raider we see that the RX 5700 XT performs the best at 2560x1440p. The RTX 2070 Super is right on its heels, being only 3 frames away. I used the "Internal Benchmark" found within Shadow of the Tomb Raider for my RX Vega 64 results, not my "Real Time Benchmarks". My RX Vega 64 does pretty well and trails the GTX 1080 Ti by only 5 fps.

At 3840x2160p (4K) we see that Nvidia's RTX 2070 Super takes the lead, with the RX 5700 XT and the GTX 1080 Ti nearly matching it. The RX Vega 64 is now only 4fps behind the GTX 1080 Ti. The RTX 2060 Super does well. The GTX 1080 trails at the bottom.


The Witcher 3

The Witcher 3 - 2560x1440p

-18% GTX 1080 Ti
-14% RTX 2070 Super
0% RX Vega 64
+2% RTX 2060 Super
+4% RX 5700 XT
+11% GTX 1080

The Witcher 3 - 3840x2160p (4K)

-26% GTX 1080 Ti
-8% RTX 2070 Super
0% RX Vega 64
+6% RX 5700 XT
+9% RTX 2060 Super
+9% GTX 1080

The Witcher 3 normally favors Nvidia GPUs, and the developers implemented Nvidia GameWorks tech. Looking at the 2560x1440p results, as expected, we see the Nvidia GPUs take a large lead. The GTX 1080 Ti takes the top spot and has a 16 frame lead over my RX Vega 64. The Vega 64 barely edges out the RTX 2060 Super and RX 5700 XT. The GTX 1080 even does well this time around, but it's still at the bottom.

The GTX 1080 Ti's lead over my RX Vega 64 diminishes from 16fps to 13fps @ 4K. The RTX 2070 Super's lead drops to only 4fps over my RX Vega 64. The Vega 64 still manages to slightly outperform the RTX 2060 Super and the RX 5700 XT. The GTX 1080 performs relatively well in this Nvidia sponsored title.


Pricing and Comparison Conclusions




You won't find the RX Vega 64 or the GTX 1080 at a reasonable new price since they released a long time ago. You can find them decently priced on the used market, and that brings me to my next point. The GTX 1080 literally came in last in ALL of the samples that I've compiled above from several sources. However, the GTX 1080 sells for far more than the RX Vega 64 on the used market. You can find the RX Vega 64 for around $225 - $250 used, while the GTX 1080 sells for around $330 - $380. Buying a GTX 1080 over the RX Vega 64 would be an awful decision if you take price and performance into account as I've shown above, but people are clearly buying them. The GTX 1080 is 27% slower (31% slower if you exclude The Witcher 3 results), yet it is at least 47% more expensive. I used the cheapest average GPU prices from all categories when comparing the prices, so that's why you see the "+" sign on the end. Some GPUs might actually be even more expensive relative to the RX Vega 64 if you can't find the cheaper Vega 64 listings. Now, as far as the power usage and heat output go, it's up to the user to decide how they want to balance the price when taking those stats into account. The GTX 1080 wasn't known as Nvidia's "coolest" card either, and I remember it being pretty dang warm back in the day when it released. Even then, I don't think anyone should be purchasing the GTX 1080 in 2020 as there are many better options at those prices. The GTX 1080 Ti would be a much better choice over the normal GTX 1080. A used GTX 1080 Ti can sometimes be purchased for around the same price as a used GTX 1080, but the "Ti" version can come at a premium used price, so keep that in mind.
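For those who like to see the math, here is the quick sanity check behind that value comparison, using the low end of the used price ranges quoted above and the roughly 27% performance deficit from my samples:

```python
# Quick sanity check on the used-market value comparison above, using the low
# end of the price ranges quoted in this section and my ~27% performance delta.

vega_64_price = 225            # cheapest typical used RX Vega 64 price (USD)
gtx_1080_price = 330           # cheapest typical used GTX 1080 price (USD)
gtx_1080_relative_perf = 0.73  # ~27% slower than the Vega 64 across my samples

price_premium = (gtx_1080_price / vega_64_price - 1) * 100
vega_perf_per_dollar = 1.0 / vega_64_price
gtx_perf_per_dollar = gtx_1080_relative_perf / gtx_1080_price

print(f"GTX 1080 price premium: ~{price_premium:.0f}%")                 # ~47%
print(f"GTX 1080 perf-per-dollar vs Vega 64: "
      f"~{100 * gtx_perf_per_dollar / vega_perf_per_dollar:.0f}%")      # ~50%
```

In other words, at those prices the GTX 1080 delivers roughly half the performance per dollar of a used RX Vega 64.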

The best "bang for the buck" would be the RX Vega 64 based on the performance numbers presented above, but there are a ton of other games out there; I simply tried to get as many samples as I could from several popular titles. For only $225 - $250 you can easily save hundreds, or at least $100 below the RX 5700 XT, which basically matches the RX Vega 64 in the games above. The RX 5700 XT would probably be the best GPU to spend money on overall at this point if you really need a powerful and affordable GPU. They can be purchased for around $320 - $470. The RX 5700 XT is obviously more up to date and has better power consumption. With a starting price around $320, that makes it only around $70 more than the 'highest' priced used RX Vega 64 on eBay. However, $70 might be a lot for some people for practically the same performance overall. So the Vega 64 would still be a fine choice for those looking to spend less than $250 on a used GPU with great 1440p and 4K max graphical performance.

The RX 5700 XT would also be a better purchase over the RTX 2070 Super due to the price differences. The RTX 2070 Super can be found at most e-tailers for around $520 upwards of $750+. There are many AIB variants and you'll have to look out for crazy prices from re-sellers. You could save $200 if you got the cheapest RX 5700 XT compared to the cheapest RTX 2070 Super brand new, from what I've seen today in the month of July (2020). Since both of these cards are easy to find and still being produced I did not check the used market. If there are better priced RTX 2070 Supers, feel free to let me know in the comment section since I looked at only a few of the popular e-tailers.

The RTX 2060 Super is kind of just "there". Nvidia just didn't want to give AMD the entire mainstream and mid-tier market, so the RTX 2060 Super exists to hopefully take some sales from the 5600\5600 XT & the 5700\5700 XT. It's normally at the bottom of my charts, right above the GTX 1080. The RX 5700 XT would probably be a better purchase over the RTX 2060 Super since they sell for around the same price.

So there you have it: the RX Vega 64 seems to be the best bang for the buck since it can be purchased on the used market at the cheapest prices with great performance. I have seen some of them sell for as low as $100, which is crazy great value. The RTX 2070 Super and the RX 5700 XT perform nearly the same, but AMD has the better price per performance ratio when it comes to new GPU purchases between those two. The GTX 1080 Ti still packs a punch, but the used price might come at a premium in some cases, whereas the RX 5700 XT can be purchased brand new in the same price range as used GTX 1080 Ti's. Don't even think about the GTX 1080 unless you play exclusively Nvidia sponsored titles and just want to own the GTX 1080 for some reason in 2020. These prices are certain to change for the RX Vega 64 after I release this article. It never fails. If you have a different opinion, feel free to post it in the comment section on the next page.



























Conclusion


We have finally reached the end of my RX Vega 64 2020 review. With approximately 64 individual benchmarks, I can honestly say that the Vega 64 is a solid GPU. Since this is my first time reviewing the RX Vega 64, I decided to give it more of a "full" review, even though my Vega 64 review is literally 3 years late. I can only review and benchmark what I have in my possession. Despite a few flaws with the power consumption, the card performs very well. You should also remember that I am running an Intel X58, so the power consumption will be much higher than on modern gaming rigs. I also explained how to alleviate the power consumption issues with little to no performance loss during gaming sessions. Using Radeon Chill (FPS Limiter) will make those power saving features even better.

AMD's Radeon RX Vega 64 was late to the party, but I explained some of the reasoning behind that at the beginning of this article. The fact that we can still use this GPU on legacy platforms is great. I haven't had any success getting the Radeon VII working on my X58, so moving forward I hope AMD doesn't forget about the legacy crowd using much older tech. As far as the Fury X performance goes, the Vega 64 would be a great upgrade for anyone who wants more 4K performance at, hopefully, a decent price on the used GPU market. AMD had a few marketing errors during the Vega 64 reveal and release. AMD teased the Vega series for an extremely long time, and we know how fast the technology field moves. The DSBR questions and issues didn't help AMD either. The last thing AMD needed was shady marketing and promises that they couldn't explain or deliver on. AMD also has to ensure that they don't let their old driver issues creep up and bite them again moving forward. There have been many discussions and concerns about AMD driver support over the past several months. With more Navi GPUs on the way, AMD needs to ensure that they will continue to support their long range of high-end GPUs for more than a few cycles.



AMD's Vega was targeting several markets with one release, and I understand AMD has to "please" different markets during their GPU rollouts. Vega, along with Navi and RDNA, has shaped AMD's potential for the future. Now we just need developers to take advantage of modern tech such as low level API functions. It has been 5 years, and the adoption rate for modern tech such as async compute has been more rapid on consoles than on desktops. This is sad to see, but at least we can see AMD trying to create efficient technology to make our hardware more effective.

With all of that being said, I think that the RX Vega 64 was a good step in a positive direction as far as performance goes. AMD has moved on from the Vega architecture now, but it still packs a punch in many gaming scenarios 3 years after release, in 2020. As far as the X58 performance goes, it still holds up well in 2020. Even though I'm limited to a PCIe 2.0 x16 bus and only 6 cores\12 threads on a 32nm + X58 platform, I can't complain. Just take a look at my Wolfenstein II gameplay footage on YouTube and you can see how well it performs @ 4K. I have definitely gotten my money's worth out of this X58 platform, with the R9 Fury X and now with the RX Vega 64.


Thank you for reading my Radeon RX Vega 64 Review - Kana's FineWine Edition.

Feel free to leave a comment below and feel free to share this article.