Westmere-EP - X5660 Full Review


Introduction

Update - 2020: X58 + AMD Radeon RX Vega 64 Review: Click Here

Update - 2020: X58 + AMD Radeon Fury X Review: Click Here

Update - 2017: X58 + AMD Radeon Fury X Review: Click Here

Update - 2015: X58 + AMD Radeon Fury X Review: Click Here

  It was suggested to me that Intel's best platform to date could be the X58-LGA1366, and from the looks of it, that suggestion may have been correct. Moving into its sixth year on the market, the legacy X58/Tylersburg platform is still alive and kicking, and there appears to be plenty of life left in it now that high-end server microprocessors are more affordable. This review is mainly for those who are on the fence about upgrading to X79 or possibly X99. I also understand that Haswell-E is right around the corner, but some users might not want to upgrade unless they absolutely have to, and some can't always buy the latest and greatest. Personally I can, but only if I feel I'm getting a lot more than what I already have.


   To most X58 users, Intel's X79 felt like a "side-grade" rather than an upgrade. I'm not saying X79 doesn't offer a lot, but is it worth the price at this point? The architectures are obviously different. However, the X58 now has upgrades costing less than $150-$200 that can easily even up the playing field. Hex-cores are available and more affordable now. Unlike Intel's latest Xeons [Sandy Bridge & Ivy Bridge], which have locked straps, LGA1366 can overclock Xeons by increasing the BCLK and/or CPU ratio. I'm sure many users are hoping to add as many years to the awesome X58 platform as possible. Many will tell others to upgrade, but not so fast: I've taken the time to compare my Xeon X5660 and L5639 to Intel's latest and greatest high-end CPUs.

   I cover the CPU benchmarks and gaming benchmarks. I also added something I like to call "Real-Time Benchmarks" for gaming: instead of running a benchmark tool, I capture the frame times and frame rates from actual gameplay. I try to play at least 25 minutes or longer to give a good review, but I can't always hit the 25-minute mark [it depends on the level and/or game mode]. I also try to select the most demanding levels. For example, my i7-960 struggled to play Crysis 3 maxed out @ 1080p; there was constant micro-stutter and bottlenecking. After I installed my L5639, and later the X5660, Crysis 3 became much more playable and runs at a smoother rate. I show the actual data from my playthrough. The differences are night and day.

Moving on, I have made a brief chart comparing the X58 architecture to the X79 architecture.

Kana Maru X58 vs X79

  Now you can see why a lot of X58 users felt the move was a side-grade. PCI-E 2.0 still has plenty of bandwidth for high-end cards; plenty of reviews have shown only a minor difference between PCI-E 2.0 and 3.0. X58 gamers can still enjoy high-end gaming as usual. Hopefully this review will help X58 users who might want to make a minor upgrade to their existing system rather than moving to a new build.


My PC Specs:
Motherboard: ASUS Sabertooth X58
CPU: Xeon X5660 @ 4.8GHz
CPU Cooler: Antec Kuhler 620 Push/Pull
GPU: GTX 670 2GB 2-Way SLI - Reference Model
RAM: 12GB DDR3-1600MHz [6x2GB]
SSD: 2x 128GB RAID 0
HDD: 4x Seagate Barracuda 7,200rpm High Performance Drives [2x RAID 0 setup]
PSU: EVGA SuperNOVA G2 1300W 80+ GOLD
Monitor: Dual - Res: 1080p, 1440p, 1600p
OS: Windows 7 64-bit
Note: All tests prior to July 2014 were performed with a 7,200rpm HDD containing the main OS.

Kana Maru Workstation & Gaming Rig

Westmere-EP - X5660

Overclocking the X5660 & Cinebench Benchmarks

 

X5660 stock clocks and minor BCLK overclock
I ran a few quick tests in Cinebench R11.5 after I first installed this chip. I scored 7.71pts in Cinebench R11.5 at stock clocks, 3.2GHz [x24]. One of the impressive things about this chip is how low the voltage was; what was even more impressive was the idle frequency of 1.6GHz. So I performed a minor BCLK increase, pushing it from 133 to 166 and leaving everything else on auto. I was able to hit 3980.50MHz [x24] very easily, in other words roughly 4GHz, and it only required 1.22 vCore! I also noticed the idle clock speed increased to 2GHz [1990MHz].

Obviously the 4GHz will down-clock to 3.8GHz once more than two cores are in use. At 4GHz [3.8GHz, x23] I scored 9.64pts in Cinebench R11.5. Not bad at all. At this speed, the Xeon L5639 with a highly clocked BCLK of 228 [and other tweaks] running at 4.1GHz is only 10% faster than the X5660 at 4GHz/3.8GHz with BCLK 166. With a minor bump in BCLK and zero other changes, the X5660 was already looking better than my lovely L5639. If you compare my X5660 @ 4GHz/3.8GHz [x23] with BCLK 166 and 1600MHz RAM to Intel's i7-4960X at stock clocks, the difference in speed is only 14.8%. That is extremely minor if you are weighing an upgrade from X58 to X79, and it may not be worth it at this point if you don't plan to overclock heavily. This CPU definitely has plenty of headroom. Continue reading for the overclocked Cinebench R11.5 settings below.
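For anyone sanity-checking their own numbers, the LGA1366 clock math is simply base clock times multiplier. Here is a minimal sketch [Python, my own illustration, not a tool from this review]:

```python
def core_clock_mhz(bclk: float, multiplier: int) -> float:
    """Effective core clock on LGA1366: BCLK x CPU multiplier."""
    return bclk * multiplier

# Stock X5660 single/dual-core turbo: ~133.33 MHz x 24 = ~3.2 GHz
# BCLK raised to 166 with the all-core x23 multiplier = ~3.8 GHz
print(core_clock_mhz(166, 23))   # 3818.0 MHz
```

The same formula explains why the x24 bin at BCLK 166 lands at roughly 4GHz [166 x 24 = 3984MHz].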

L5639 Recap + Xeon Info

Taking another look at the Xeon L5639 vs several Intel CPUs at stock clocks

As the title suggests, I'm comparing an overclocked Xeon L5639 to stock high-end Intel CPUs. The i7-4770K isn't "that" high-end, but I have seen people upgrading to Haswell. This should give LGA1366-X58 users an ideal comparison against stock higher-end CPUs. I personally like to compare my overclocked CPU to Intel's latest at stock clocks; it really helps me decide whether an upgrade is worth it. Now, does this mean the Xeon L5639 or Xeon X5660 would perform better clock for clock? Of course not, since the CPUs use different architectures. Obviously people who still use X58 as their main gaming and workstation platform will be looking to overclock the Xeon L5639 and the Xeon X5660. The highest constant overclock I could achieve at a reasonable vCore with the Xeon L5639 was 4.1GHz. I've gathered several stock-clock benchmarks from reputable review sites, so let's see how an overclocked Xeon L5639 compares.



Obviously the latest and greatest Intel CPUs will overclock better. They also cost $800-$1,000, while the L5639 [$70-$150] and X5660 [$150-$200] are affordable now. So the cheaper L5639 packs a nice punch for those still running X58. If you can manage a high BCLK with the L5639, you'll see that the i7-4960X [stock] is only 4.2% faster than the overclocked L5639. The Xeon X5660 has a higher x24 CPU multiplier, which makes it easier to overclock; the Xeon L5639 has a x20 multiplier. On both Xeons [L5639 x20, X5660 x24 at best] the multiplier fluctuates with the number of active cores.

 

 For instance, using the Xeon L5639 as an example: cores 1 & 2 operate at x20. Once the 3rd or 4th core becomes active, the multiplier drops to x19; when core 5 or 6 is active, it drops to x18, and so on, with x16 being the lowest multiplier. Some motherboards can lock the CPU multiplier/CPU ratio to x16 or x18. The x19 and x20 steps can only be enabled if you have the C-state functions, so the x18 CPU ratio should be your main focus. The only way to overclock this CPU is to increase the ratio, the BCLK, and various other settings in the BIOS. With all that said, the 1366/2011 i7 "X or K" counterparts are unlocked, allowing a much easier overclock; the L5639 therefore takes some patience to overclock past BCLK 200-215 due to the low multiplier. From what I've read from several users, hitting 4GHz is pretty easy for the average overclocker, and I can tell you the L5639 is pretty easy to overclock if you plan to use the C-states. Most X58 motherboards can push the BCLK up towards 200MHz with minor issues, which would put most around 3.8GHz to 4.0GHz with the x20 multiplier.
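The step-down behaviour above can be modelled in a few lines. This is a hypothetical sketch [Python] of the L5639 behaviour exactly as described, with C-states enabled; the precise steps can differ per board and BIOS:

```python
def l5639_multiplier(active_cores: int) -> int:
    """Turbo multiplier vs. active cores on the L5639 (C-states enabled).
    x16 is the non-turbo floor; x19/x20 only appear with 1-4 cores awake."""
    if active_cores <= 2:
        return 20      # cores 1-2 active
    if active_cores <= 4:
        return 19      # 3rd or 4th core active
    return 18          # 5 or 6 cores active

def effective_ghz(bclk_mhz: float, active_cores: int) -> float:
    return bclk_mhz * l5639_multiplier(active_cores) / 1000.0

# At BCLK 200: two cores boost to ~4.0 GHz, an all-core load sits at ~3.6 GHz
```

This is why the x18 all-core ratio is the one to plan your BCLK around: it's the multiplier you'll actually hold under a six-core load.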

 



  Before jumping into this benchmark section, I'd like to point out my performance increase. Coming from the i7-960, I have seen a huge jump. The Bloomfields are pretty damn hard to overclock past 4.2GHz, mostly due to several limitations and voltage issues; it's hard to get an i7-960 past 4.1-4.2GHz without serious cooling and a high vCore [or the golden chip]. My performance gains in Cinebench R11.5 were a breathtaking 76.1% comparing my old i7-960 to the X5660. Now that's what I call an upgrade.

X5660

  The i7-4960X has a difference of only 9% when compared to the X5660 @ 4.8GHz, and remember that I'm only running DDR3-1600MHz RAM. The Xeon X5660 is pretty impressive. At 1.36v I was able to hit 4.6GHz; this voltage is just outside Intel's recommended max of 1.35v. I was able to score 11.89pts @ 4.6GHz in Cinebench R11.5, which puts the i7-4960X @ 4.4GHz only 13% ahead of the X5660 @ 4.6GHz. When I pushed the BCLK to 209 and increased the vCore to a stable 1.43v, I was able to hit 4.8GHz rather easily. That's outside Intel's max [by only 0.08v], but safe enough for me to test and play games for hours without worrying. You'll definitely want aftermarket cooling if you plan to overclock this CPU heavily.

  So once again, the i7-4960X @ 4.4GHz vs the X5660 @ 4.8GHz difference is only 9%, and I'm only running DDR3-1600MHz RAM. I'm sure that with faster RAM and tighter timings you could shrink that 9% to possibly 7% or less. I can say I'm pretty impressed. Between the X5660 @ 4.8GHz and the L5639 @ 4.1GHz, the difference is roughly 17%, easily making the X5660 the better choice for X58 users who don't want to spend a lot on legacy technology.

 

L5639

  Now let's even up the playing field a bit. I have included some overclocked examples to give you a better representation of the "locked" L5639. The i7-4960X @ 4.4GHz is 27.1% faster than the L5639 @ 4.1GHz running DDR3-1333MHz. 27.1% might not be enough to make a ton of X58 users run out and spend approx. $1,059.00 on the latest and greatest CPU, plus more for a new platform motherboard. Most L5639 users should be able to reach 4GHz rather easily with the x20 multiplier and low vCore. Those who manage to reach 4GHz or 4.1GHz with the x18 multiplier will definitely get great results while playing games, and those who can reach a 180-200MHz BCLK will be just as happy. This CPU definitely gets the job done. Just be sure to leave the C-states enabled.

Cinebench R11.5: Clock for Clock - 4.8GHz Comparison

  After a recent request, I decided to post clock-for-clock comparisons. Instead of comparing the X5660 @ 4.8GHz to the lower-clocked Sandy Bridge-E and Ivy Bridge-E parts, I have posted clock-for-clock comparisons @ 4.8GHz for the i7-4960X, i7-3970X and the X5660 in Cinebench R11.5. Remember that the i7-4960X and the i7-3970X have faster RAM, a newer architecture and faster single-core speed. It took a while to find an i7-3970X @ 4.8GHz result, so it must be pretty rare. I threw the quad-core i7-920 @ 4.4GHz into the mix to give those running Bloomfields below that clock speed an idea of the potential upgrade percentage. Getting a Bloomfield past 4.0-4.2GHz can be a challenge.



i7-4960X @ 4.8GHz + DDR3-1866MHz = 14.58 [-17.7%]
i7-3970X @ 4.8GHz + DDR3-2134MHz = 13.82 [-11.6%]
X5660 @ 4.8GHz + DDR3-1600MHz = 12.38 [0.0%]
i7-920 @ 4.4GHz + DDR3-1600MHz = 7.41 [+67%]

  I originally wrote the Cinebench R11.5 review for the L5639 comparison and later added the X5660 results; this should provide a better comparison for those looking to upgrade to the X5660. The X5660 still holds its ground. Clock for clock, coming within 11.6% of the highly clocked 3970X is pretty damn good [the 3970X score was taken from HWBOT as well]. The i7-4960X's lead increased from 13% to nearly 18%, but the X5660 is still within 17.7% of it. The comparison is still a bit one-sided since I'm running legacy tech and lower memory speeds, yet I'm still impressed with the X5660. The X5660 is 67% faster than the i7-920 @ 4.4GHz, and you don't find many i7-920s running 4.8GHz without nearly ruining the chip. The i7-4960X is a whopping 96.7% faster than the i7-920; even if the i7-920 were running 4.8GHz, I'm sure the 4960X would still stomp it by at least 80%. With all that said, I hope this answers some unasked questions.
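All of the percentage deltas in these charts follow the same formula: score divided by baseline, minus one. A quick sketch [Python, my own helper, not part of Cinebench] that reproduces the numbers above:

```python
def pct_vs_baseline(score: float, baseline: float) -> float:
    """Positive = `score` is that many percent faster than `baseline`."""
    return (score / baseline - 1.0) * 100.0

# X5660 @ 4.8GHz (12.38 pts) as the 0.0% baseline:
print(round(pct_vs_baseline(14.58, 12.38), 1))  # i7-4960X: ~17.8% faster
print(round(pct_vs_baseline(12.38, 7.41), 1))   # X5660 vs i7-920: ~67.1% faster
```

Small rounding differences against the chart [17.7% vs 17.8%] come from where the decimals were cut off, not from the method.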

 

  Cinebench R15





i7-3970X @ 4.9GHz = 1252 cb
Xeon X5660 @ 4.8GHz = 1110 cb
Xeon L5639 @ 4.1GHz = 965 cb

 

  There are a lot of Cinebench R15 scores available, and Cinebench loves faster RAM. I chose an i7-3970X ranked on HWBOT; it is 12.8% faster than my X5660, but it's running DDR3-2423MHz while my X5660 is once again on 1600MHz. Those numbers could easily change for both processors with different RAM. It's hard to find entries that match my RAM settings, so I matched on processor speed instead.

   The 3970X is roughly 30% faster than the L5639 running DDR3-1333MHz. For only $70, that's pretty damn good for nearly six-year-old technology [the i7-3970X retailed for $1,039.99 and currently sells for $700-$900]. The numbers look good, but the performance increase is what really matters to me, and a 12.8% gain over the X5660 isn't going to make me run out and upgrade my PC. You'd also have to add the price of a new motherboard and CPU, not to mention coolers and the other things needed when changing platforms. Obviously enthusiasts will always have that upgrade itch; maybe the X5660 can ease the pain a little longer.

 



Xeon X5660 Performance Increase [+] / Decrease [-]

Multi-Core - Overclocked:
i7-4960X @ 4.4GHz = 42967 [-12.5%]
i7-3970X @ 4.6GHz = 41359 [-8.3%]
X5660 @ 4.8GHz = 38162 [0.0%]
i7-4770K @ 4.6GHz = 36644 [+4.1%]
i7-3770K @ 4.8GHz = 32738 [+15.1%]
L5639 @ 4.1GHz = 32627 [+16.5%]
i7-920 @ 4.4GHz = 25143 [+52%]


Single Core - Overclocked:
i7-4770K @ 4.6GHz = 9288 [-33.5%]
i7-3770K @ 4.8GHz = 8467 [-21.7%]
i7-4960X @ 4.4GHz = 8037 [-15.5%]
i7-3970X @ 4.6GHz = 7699 [-10.7%]
X5660 @ 4.8GHz = 6953 [0.0%]
L5639 @ 4.1GHz = 5862 [+18.6%]

Cinebench R10 is pretty old but still useful. The OpenGL test isn't that useful to me, but the CPU benchmark scores are. In the multi-core test the Xeon L5639 actually does pretty well: the i7-4960X is 31.7% faster than the L5639 @ 4.1GHz and 71% faster than the i7-920 @ 4.4GHz. The i7-4770K [quad-core] clearly outperforms the other CPUs core for core in the single-core test. The i7-4960X is only 12.5% ahead of the X5660 in the multi-core benchmark. i7-920 and i7-960 users will definitely see a lot of performance gains if they choose to upgrade to the X5660 or the L5639.


WinRar, 7-zip, AIDA & Benchmarks

 

WinRar 4.20

 

  Moving on to the WinRar v4.20 benchmark, I'm comparing the X5660, L5639, and the i7-960. The results were amazing considering my i7-960 numbers; note that I only ran my X5660 @ 4.6GHz during this test. Here are the results:



Xeon X5660 @ 4.6GHz = 16,458
Xeon L5639 @ 4.1GHz = 12,441
i7-960 @ 4.2GHz = 8,519

  Users running the i7-920 or i7-960 will see a nice upgrade when it comes to extracting files. The L5639 performed very well for a chip that sells for $75-$150, especially if you are looking to upgrade your quad-core on the X58 platform. Upgrading from my i7-960 to the X5660 gave me a 93% performance increase in the WinRar v4.20 benchmark for approx. $200; the upgrade has been worth the purchase so far. Keep in mind WinRar isn't the best benchmarking tool for multi-core CPUs and relies heavily on tighter RAM timings. I can unpack 1.17GB with the X5660 in less than 10 seconds. I've also run the 7-zip benchmark; check below.

 

  7-zip v9.20

 



7-zip shows some interesting results. The decompression performance difference is less than 1% between the i7-4960X and the X5660; in fact the X5660 has now taken the lead in the decompress test, with the exact difference favoring my X5660 by 0.15%. When it comes to the compression benchmark, the difference is only about 3%. I expected a wider gap, but that wasn't the case here. The i7-4960X is 16.3% faster than the L5639.

 

AIDA64

CPU Queen:

i7-4960X @ 4.7GHz = 81760 [-7.8%]
i7-3960X @ 4.7GHz = 81402 [-7.3%]
X5660 @ 4.8GHz = 75826 [0.0%]
L5639 @ 4.1GHz = 64727 [+17.1%]
i7-4770K @ 4.6GHz = 59950 [+26.5%]
i7-3770K @ 4.6GHz = 59385 [+27.7%]

CPU PhotoWorxx:
i7-4960X @ 4.7GHz = 24876 MPixel/s [-45.1%]
X5660 @ 4.8GHz = 17142 MPixel/s [0.0%]
L5639 @ 4.1GHz = 15218 MPixel/s [+12.6%]

CPU ZLib:
i7-4960X @ 4.7GHz = 598.8 [-23.6%]
X5660 @ 4.8GHz = 484.3 [0.0%]

FPU VP8:
i7-4960X @ 4.7GHz = 8392 [-23%]
X5660 @ 4.8GHz = 6814 [0.0%]
L5639 @ 4.1GHz = 5897 [+15.5%]

FPU SinJulia:
X5660 @ 4.8GHz = 9955 [0.0%]
i7-4960X @ 4.7GHz = 9483 [+4.9%]
L5639 @ 4.1GHz = 8506 [+17%]

 

The X5660 holds its own in a lot of these tests. The i7-4960X is king since it's using newer technology, but the X5660 actually beat the 4960X in one test by 4.9%, which is pretty rare considering the architectures both processors are using. Overall I can't complain.


Performance Test 7 & 8 Benchmarks

Performance Test 7

 

  I could not find enough results to compare with Performance Test 7, so I simply compared the X5660 to the L5639. This should give those looking to upgrade a decent comparison. Check below for the newer Performance Test 8 results.

 

 

Performance Test 8

  Let's take a look at Performance Test 8. In this benchmark I compared the X5660 and L5639 against several baselines at similar clock speeds. I tried to pick the best results from systems that are similar to one another.

Xeon X5660

  At the time of this posting, Performance Test 8 is not working properly on my system. I have tried reinstalling the program, but I cannot resolve the issue, so I cannot compare my X5660 @ 4.8GHz against other processors running near the same speed and memory speed. However, you can visit the Performance Test website to compare my score with others; the site states that its graphs are updated once per day when at least 2 samples are available.

Visit this website: www.cpubenchmark.net/high_end_cpus.html

X5660 @ 4.8GHz - DDR3-1600MHz CPU Mark: 12937

  If I can find some CPUs with similar setups I will compare them and update this section of the review.

 

L5639 @ 4.1GHz

  Intel's latest platforms have advantages over X58; the architecture is definitely different and quicker. However, that doesn't mean the L5639 can't compete. You just can't expect it to win every category, so it's up to the user to decide. Upgrading from an i7-920 @ 4.2GHz to an L5639 would be worth it, but would an X79+i7 upgrade be? Judging from the Performance Test 8 CPU Mark, I personally wouldn't upgrade just yet... well, unless you are running an i7-920 or i7-960.

L5639 CPU Mark: Score
Performance Increase [+] / Decrease [-]

--- Intel Core i7-4960X @ 4.2GHz = 14853 [-34.4%]
--- Intel Core i7-3970X @ 4.0GHz = 13117 [-18.7%]
--- Intel Core i7-3770K @ 4.4GHz = 11786 [-6.6%]
--- Intel Core i7-4770K @ 4.2GHz = 11405 [-3.2%]
--> Intel Xeon L5639 @ 4.1GHz = 11050 <- [0.0%]
--- Intel Core i7-920 @ 4.2GHz = 7962 [+38.8%]


X5660 Power Usage Charts and Results

Here are my power usage tests for the X5660 and my rig. I originally planned to run triple or quad GPUs but never got around to it; it's still a possibility, but two GPUs have been more than enough for now. A user asked me about the voltages from 3.8GHz to 4.6GHz. I haven't gotten around to running the entire test suite yet, so please stay tuned for more updates as the days and weeks pass. I tried my best to give you a good estimate, and I recorded the max voltage as well.

Let's not forget that I'm running a fairly fully loaded system, and I'm planning to upgrade and add more: a custom water-cooled system with newer [water-cooled] GPUs. My 4,000rpm Deltas can pull up to 17.4 watts apiece, and my four Scythe Gentle Typhoon AP-31s [5,400rpm] can easily pull 13.68 watts each - and that's the good news. The bad news is that I need 32.28 watts just to spin up one AP-31 fan! I tried to mix it up a little while running the various tests. As I said above, I'll continue to update my charts and info over the coming weeks.

 


-Dark Grey =  Average wattage
-Light Grey = Max \ Peak wattage

Stock


Note: "Powered Off" wattage includes 2 monitors, speakers and other electronics

Overclocked @ 4.6GHz

Sorry if I didn't run enough tests; many of the benchmarks in this topic take a long time to perform, and I get paid nothing to do it. However, I love what I do. I work with computers, servers, routers, switches, UPSs, printers and many other types of equipment and technology on a regular basis, so if I'm ever a little slow with updates, just bear with me. I'll continue to mix up the tests as well.

I'm no electrician or power-saving expert [although I've had to learn a good amount in my field], but I would suggest unplugging your PC if the power switch is easily accessible. My PC draws around 8.7 watts while powered off. That's very minor, but I still unplug the PC since those watts add up over every 30-31 day billing cycle.
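The "watts add up" point is easy to put a number on. A small sketch [Python, my own arithmetic] for monthly standby energy; multiply the result by your local $/kWh rate:

```python
def standby_kwh(watts: float, days: float = 30) -> float:
    """Energy a 'powered off' PC draws over a billing period, in kWh."""
    return watts * 24 * days / 1000.0

# 8.7 W of standby draw over a 30-day month:
print(round(standby_kwh(8.7), 2))   # ~6.26 kWh
```

A few kWh a month is small on any one bill, but across a year [and across every always-plugged device in the house] it's real money.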


X5660 vs High-End SB-E & IVB-E Benchmarks & Comparisons

  I've finally gotten around to creating an account and uploading some HWBOT scores. Since it's going to be a challenge to find high-end X79 i7s running DDR3-1600-1670MHz, I'll compare clock speeds and a few other things. Most of my benchmarks run at 4.8GHz with DDR3-1670MHz, and one test runs at 4.6GHz with DDR3-1600MHz. I'm going to compare the current #1 CPU [hexa-core only] against my CPUs in terms of performance percentage. After the comparison I'll elaborate on the difference and whether the upgrade is worth it.

   Then I'll look for a lower-ranked high-end X79 CPU running 4.8GHz+ with [looped or traditional] water cooling and a similar vCore, starting from my score and working upwards until I find what I'm looking for. I feel it wouldn't be fair to compare my build against builds using extreme measures to cool the entire motherboard/CPU [dry ice, LN2, phase change, etc.]. However, when comparing the #1 CPUs I'll ignore the type of cooling. My complete specs are on the first page; I'm using the Antec Kuhler 620 to cool my CPU.

Here is the Comparison Chart:

Kana Maru HWBOT comparison 6 9 14

HWBOT scores as of 6/8/14

  Alright, so there you have it. This definitely took a while to set up and compare, but I think it was worth it in the end. I'm so glad I didn't spend $2,000.00 on the X79 build I was considering late last year. I've decided to skip the R9 290X for now; my GTX 670s are doing just fine, and I think I'll wait for new GPU architectures to release. I still think my build will be fine until Skylake-E releases - that's what I'm trying to hold out for. It also appears that scores decrease once you go above 6 cores/12 threads in a lot of the tests I've seen; I'm sure that will change in a few years.



 

OLD GTX 670 SLI results below:

  Alright, here are my gaming benchmarks; I will post more as I complete them. I'm comparing my X5660, L5639 and i7-960 results - if you would like to compare your scores, please post them. I tried to recall some of my i7-960 scores. 4GHz-4.2GHz is all most gamers will need anyway with hex-cores. The X5660 will be the better choice if you stream games online in HD; it should be able to handle the load and the high-end-CPU settings in certain streaming programs.

GTX 670 2GB 2-Way SLI @ 966MHz [Boost: 1228MHz] [except 3DMark Cloud Gate and Vantage, which were run @ the stock 915MHz]

Note: Some of these tests have been updated in the "DDR3-1600MHz vs 1900MHz vs 2000MHz Performance % Comparisons" section.

3DMark Fire Strike and 3DMark 11 Results w/ 337.88 drivers



Fire Strike - X5660 @ 4.8GHz = 11462
3DMark 11 - X5660 @ 4.8GHz = P16832

 

3DMark 11



X5660 @ 4.8GHz = P16449
L5639 @ 4.1GHz = P15692
i7-960 @ 4GHz was around P12000-P13000

Looks like the GTX 670 2GB 2-Way SLI is close to maxed out with the hex-cores. The X5660 increased my Graphics [+174 points], Physics [+1,736 points] and Combined [+809 points] scores. If I could get both GPUs running @ 1280MHz, I'm sure I could probably crack the 17000 mark. The benchmarks ran smoothly with the hex-core CPUs; to the naked eye I couldn't notice any difference. My GPUs are running on air, so heat did become an issue.

3DMark Vantage


X5660 @ 4.8GHz = P47325
L5639 @ 4.1GHz = P45164


Here is another minor difference in score.

3DMark Ice Storm



X5660 @ 4.8GHz = P197876
L5639 @ 4.1GHz = P173914
i7-960 @ 4GHz = P157635

 

Now we can see some bigger differences in score and performance. The X5660 scored 26% higher than the i7-960 and 14% better than the L5639. The X5660 performed pretty well with the GTX 670s @ 915MHz.

3DMark Fire Strike



X5660 @ 4.8GHz = 11205
L5639 @ 4.1GHz = 10900
i7-960 @ 4GHz = 9787

 

Using hex-cores definitely makes the benchmark much smoother. The X5660 offered a 14.4% increase over my i7-960 @ 4GHz and only 2.7% over the L5639.

 

RealBench V2 Leaderboard

 

  I was on Twitter recently and noticed a post from Asus about their "RealBench V2 Leaderboard". I thought I'd give it a shot and compare against the number 1 leader in the world - better yet, I uploaded my score to the Asus website. The top-ranked user is running an i7-3930K @ 5.3GHz with DDR3-2400MHz and a GTX 780 Ti. Here are the results and comparisons to my X5660.

i7-3930K @ 5.3GHz + DDR3-2400MHz + GTX 780 Ti = 111001 [+26%]
Xeon X5660 @ 4.8GHz + DDR3-1670MHz + GTX 670 2GB Reference 2-Way SLI = 87926 [0.0%]

   Given that my gaming rig is dated [a six-year-old platform], it's only 26% slower than a highly overclocked i7-3930K with faster RAM and faster, overclocked GPUs. According to RealBench v2, if I were to upgrade to the X79 platform with a 5.3GHz OC CPU, faster 2400MHz RAM and a pair of highly clocked GPUs, I would only gain 26% in performance. In my opinion that is definitely not worth the price of upgrading; I would come out cheaper keeping my current platform, adding faster RAM, and moving to GPUs better than the GTX 670 2GB reference models. I'll be sure to upload my results whenever I upgrade my GPUs. For now, being within 26% is good enough for me to keep supporting my current platform.



Real Time Benchmarks™


X5660 + X58 + AMD Fury X 2015 Results Here: 
https://overclock-then-game.com/index.php/benchmarks/8-amd-fury-x-review

 

X5660 +X58 + AMD Fury X 2016 Results Here:
https://overclock-then-game.com/index.php/benchmarks/15-crimson-relive-16-12-1-several-games-benchmarked-4k

OLD GTX 670 SLI results below:

  Real Time Benchmarks™ is something I came up with to differentiate standalone benchmark tools from actual gameplay. I play the games for a predetermined time or a specific online map, depending on the game mode and the level/mission at hand, and I tend to go for the most demanding levels and maps. I capture all of the frames and use 4 different methods to ensure the frame rates are correct for comparison; in rare cases I'm forced to use a 5th method to determine the fps. It takes a while, but it is worth it in the end.

   These tests should give those with similar high-end setups a good comparison between stock and overclocked CPU performance. For those who don't have a similar setup, these benchmarks will show whether the X5660 is worth the upgrade. Some games are more CPU-dependent than others. I'm only running GTX 670 2GB reference cards in 2-way SLI; I'm hoping to upgrade to a quad SLI/CrossFireX setup on my X58 platform, I just have to choose between Nvidia and AMD. In the meantime, this is how the X58 runs in 2014+ with some of the latest and greatest high-end games available.



 

Middle-earth: Shadow of Mordor [Very High + Medium Texture Quality] – 2560x1600p
- Note: Unofficial SLI Support -


GTX 670 2GB 2-Way SLI @ 988MHz [Boost: 1228MHz] [344.11 WHQL Drivers]
CPU: Xeon X5660 @ 4.8GHz
RAM: DDR3-1600MHz
Gameplay Duration: 41 minutes 17 seconds
Captured 393,643 frames
FPS Avg: 41fps
FPS Max: 97fps
FPS Min: 15fps
FPS Min Caliber™: 29fps
Frame time Avg: 22.5ms

-FPS Min Caliber?-
You'll notice I added something named "FPS Min Caliber" [well, that is if anyone even looks at these charts nowadays]. FPS Min Caliber is something I came up with to differentiate from the absolute minimum FPS, which could simply be a data-loading hitch during gameplay. FPS Min Caliber™ is my way of telling you the lowest FPS average you'll actually see during gameplay; the raw minimum fps [FPS Min] can be very misleading. I plan to continue using this in the future as well.
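For the curious, here is one way stats like these could be computed from a captured frame-time log. The 1%-low reading of "FPS Min Caliber" is my own assumed interpretation for illustration; the exact method isn't spelled out here:

```python
def fps_stats(frame_times_ms):
    """Summarise a list of per-frame times (milliseconds)."""
    fps = [1000.0 / t for t in frame_times_ms]
    # 'min_caliber': average FPS of the slowest 1% of frames -- a sustained
    # low rather than a single worst-case hitch (assumed interpretation).
    slowest = sorted(fps)[: max(1, len(fps) // 100)]
    return {
        "avg": sum(fps) / len(fps),
        "min": min(fps),
        "max": max(fps),
        "min_caliber": sum(slowest) / len(slowest),
    }
```

The point of a metric like this is that one 100ms loading hitch barely moves the average of the slowest slice, while the raw minimum would report it as the whole experience.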

Now to the results: this is much better in terms of playability. There was no button lag or stuttering, and the game was smooth. Very High + Medium Texture Quality still looks amazing - no complaints from a 2GB GPU user. Also remember that this game doesn't officially support SLI just yet, but I have managed to get it working on my PC.
 
Middle-earth: Shadow of Mordor [Ultra Settings + HD Texture Pack] – 2560x1440p
- Note: Unofficial SLI Support -






GTX 670 2GB 2-Way SLI @ 988Mhz [Boost: 1241Mhz] [344.11 WHQL Drivers]
X5660
RAM: DDR3-1600Mhz
Gameplay Duration: 33 minutes 29 seconds
Captured 49,095 frames
FPS Avg: 25fps [24.58fps]
FPS Max: 65fps
FPS Min: 6fps
FPS Min Caliber ™: 12fps
Frame time Avg: 40.5ms



In order to run this game with the HD Texture Pack you need at least a 6GB GPU. Obviously I have a 2GB GPU, but hell, I ran the test anyway. Also, SLI isn't currently supported in this game, but I did find a way to get it to work with both GTX 670 cards. As you can see above, the cards just can't handle this game at 1440p + Ultra. I'll run another test at 1080p, but since the texture pack requires a 6GB card I'm not sure how much more I'll gain.

The Ultra settings won't help my cause much either. GTX 670 users will need to stick to "Medium Texture" settings and "Very High" Graphics Quality settings. The game still looks really good regardless. As far as Ultra Settings & HD Content go, it's playable, but the experience is pretty bad. There's plenty of button lag, random micro lag, stutter, and the random low FPS doesn't help either. A 40.5ms frame time is simply unforgiving. 2GB users should stick with Very High and Medium Texture settings for sure.







Watch Dogs [Ultra Settings + E3 Graphic Mods] – 1920x1080p
MSAA x2






GTX 670 2GB 2-Way SLI @ 988Mhz [Boost: 1228Mhz] [337.88 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
Gameplay Duration: 14 minutes 5 seconds
Captured 90,355 frames
FPS Avg: 68fps
FPS Max: 131fps
FPS Min: 21fps
Frame time Avg: 9.30ms









Watch Dogs [Ultra Settings + E3 Graphic Mods] – 2560x1440p
MSAA x2







GTX 670 2GB 2-Way SLI @ 988Mhz [337.88 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
Gameplay Duration: 22 minutes 59 seconds
Captured 55,939 frames
FPS Avg: 41fps
FPS Max: 55fps
FPS Min: 9fps
Frame time Avg: 24.7ms








Tomb Raider [Ultra] - 3500x1800p

Stock GTX 670 2GB 2-Way SLI @ 988Mhz [337.50 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
CPU Average: 35c
CPU Max: 45c
Ambient Temp: 25.5c
CPU Usage Avg: 4.9%
CPU Usage Max: 28.4%
Gameplay Duration: 15 minutes 21 seconds
Captured 49,318 frames
FPS Avg: 54fps [53.51]
FPS Max: 70fps
FPS Min: 20fps
Frame time Avg: 18.7ms
 









Battlefield 4 100% Maxed [Ultra] - 3500x1800p
Stock GTX 670 2GB 2-Way SLI @ 988Mhz [Boost: 1228Mhz] [337.50 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
CPU Average: 48c
CPU Max: 59c
Ambient Temp: 24c
CPU Usage Avg: 17%
CPU Usage Max: 49%
Gameplay Duration: 10 minutes 36 seconds
Captured 17,241 frames
FPS Avg: 43fps [42.97]
FPS Max: 116fps
FPS Min: 7fps
Frame time Avg: 22ms
 











Battlefield 4 100% Maxed [Ultra] - 2560x1440p
Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1221Mhz] [337.50 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
CPU Average: 44c
CPU Max: 57c
Ambient Temp: 22c
Gameplay Duration: 34 minutes 46 seconds
Captured 141,055 frames
FPS Avg: 69fps
FPS Max: 147fps
FPS Min: 32fps
Frame time Avg: 14.5ms








Battlefield 4 100% Maxed [Ultra] - 2560x1600p
Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1221Mhz] [337.50 BETA Drivers]
X5660
RAM: DDR3-1600Mhz
Gameplay Duration: 1 hour 3 mins 58 secs
Captured 235,043 frames
FPS Avg: 62fps [62.25]
FPS Max: 139fps
FPS Min: 30fps
Frame time Avg: 15.2ms











Thief @ 1920x1080 - Very High [100% Maxed] - Stock CPU
Stock GTX 670 2GB 2-Way SLI @ 966Mhz [Boost: 1228Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz
Gameplay Duration: 25 minutes 36 seconds
117,328 Frames Captured
FPS Avg: 76fps
FPS Max: 97fps
FPS Min: 19fps
Frame time Avg: 13.1ms

I've been so busy with the new RAM that I haven't had time to post my Thief Very High Settings results. Running stock speed gave me some pretty good numbers. There was minor stuttering during the loading process, but nothing major. The game was smooth and I had no issues at all. This was expected since the game uses Unreal Engine 3. While the engine is 10 years old, it still looks fantastic. The Xeon handled this game well at stock speeds, but this isn't its full potential.
 
 

Thief - Overclocked Xeon

Thief @ 1920x1080 - Very High [100% Maxed] 

Stock GTX 670 2GB 2-Way SLI @ 966Mhz [Boost: 1228Mhz]
X5660 @ 4.6Ghz
RAM: DDR3-1600Mhz

Gameplay Duration: 22 minutes 19 seconds
145,111 Frames Captured
FPS Avg: 108fps
FPS Max: 116fps
FPS Min: 25fps
Frame time Avg: 9.22ms

My overclocked Xeon proves once again that hex-cores are great for gaming. Just about every stat I recorded increased. This time around I didn't capture the CPU temp or the CPU usage. I gained 32 frames per second on average and dropped my frame time down 3.88ms. This game is looking very nice; I can't wait to see what Unreal Engine 4 will bring regarding graphics. Ignore the fps that reads off the chart, those high FPS readings are from the loading screen.
 



Star Swarm Stress Test [Benchmark Tool v1.0] – Extreme - Attract @ 1920 x 1080
Stock GTX 670 2GB 2-Way SLI @ 966Mhz [Boost: 1241Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz
CPU Avg: 17%
CPU Max: 41%
CPU Temp Avg: 29c
CPU Temp Max: 32c
Room Ambient: 21c
Gameplay Duration: 6 minutes
7494 Frames Captured
FPS Avg: 20fps
FPS Max: N/A [Benchmark Tool]
FPS Min: 3fps
Frame time Avg: N/A [Benchmark Tool]
 

Star Swarm Stress Test [Benchmark Tool v1.0] – Extreme - Follow @ 1920 x 1080
Stock GTX 670 2GB 2-Way SLI @ 966Mhz [Boost: 1241Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz
CPU Avg: 15%
CPU Max: 32%
CPU Temp Avg: 29c
CPU Temp Max: 33c
Room Ambient: 21c
Gameplay Duration: 6 minutes
9296 Frames Captured
FPS Avg: 26fps
FPS Max: N/A [Benchmark Tool]
FPS Min: 3.5fps
Frame time Avg: N/A [Benchmark Tool]
 

Star Swarm Stress Test [Real Time Benchmarks] – Extreme - Follow @ 1920 x 1080
Stock GTX 670 2GB 2-Way SLI @ 966Mhz [Boost: 1241Mhz]
X5660 @ 4.6Ghz
RAM: DDR3-1600Mhz

CPU Avg: 12%
CPU Max: 25%
CPU Temp Avg: 46c
CPU Temp Max: 51c
Room Ambient: 21c
Gameplay Duration: 6 minutes
13905 Frames Captured
FPS Avg: 39fps
FPS Max: 124fps
FPS Min: 5fps
Frame time Avg: 25.9ms

Alright, so I've run more tests for those wondering if the hex-cores are worth it for gaming. In this test I've overclocked my GTX 670 2GB 2-Way SLI to 966Mhz, which boosts up to 1241Mhz, to take on the Star Swarm Stress Test, recently updated on Steam. So it's a pretty decent GPU overclock. I'm comparing my X5660 @ 3.2Ghz-DDR3-1333Mhz and 4.6Ghz-DDR3-1600Mhz. The first two tests were run using only the Benchmark Tool\Stress Test and I did not run my personal tests. However, I did perform my Real Time Benchmarks along with the Star Swarm Benchmark Tool in the last test [4.6Ghz-1600Mhz].


3.2Ghz DDR3-1333Mhz Results:
The X5660 @ 3.2Ghz & overclocked GTX 670s with the Extreme Preset appears to be playable. When there is a lot happening on the screen, the game simply drops the frame rate sharply. You can be at 30-40fps and the next second you are rapidly dropping to 10fps, or even worse, 3fps. It seemed like a slow-motion scene in the Matrix. I was using the benchmark tool, so I'm guessing the frame times were near the 180ms-200ms mark. This could simply be a limitation of the GTX 670 2GB's specs. This benchmark is definitely demanding and should bring most cards to their knees.


4.6Ghz DDR3-1600Mhz Results:
Now, to lift any bottlenecking, I've OC'd my CPU. Anyone running a 3.8Ghz hex-core or higher should have no major bottlenecks. I run my CPU overclocked to 4.6Ghz because it only requires 1.36 vCore, which is about as safe as it gets with higher overclocks. With all of that being said, this Benchmark Tool does rely on a decently clocked Intel CPU or AMD APU for better results. I'm comparing the Extreme - Follow @ 1920x1080 results. If you look at the comparisons above you'll see some pretty interesting numbers. The most obvious is that the Extreme - Follow @ 1920x1080 preset showed an increase of 13fps, from 26fps [3.2Ghz] to 39fps [4.6Ghz]. That's actually really good. That's a 50% increase! The frames per second aren't the only tell-tale. If you take a look at the frames that were actually captured, you'll see a huge increase of around 50% as well. The CPU can keep up with the GPU, which allows more frames to be captured during the benchmark and means a more pleasant experience while playing. The average frame time was very good as well [25.9ms]. The GTX 670 2GB reference still suffered from extremely high frame times; my highest was 140ms, which knocked my frames down to 5fps. So, as I said above, anyone running a 3.8Ghz or higher hex-core shouldn't really have any bottleneck issues. With that being said, this benchmark is rough on graphics cards. I personally prefer to post results from actual gameplay. However, this will do for now. Feel free to compare your GPU scores and CPU speed.
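As a sanity check on the numbers above: in a fixed-length benchmark, the captured-frame count should come out to roughly average FPS times duration. A quick sketch using the reported 6-minute Star Swarm figures:

```python
# In a fixed-length run, captured frames ~= average FPS x duration (seconds).
# This cross-checks the Star Swarm numbers reported above (6-minute runs).
DURATION_S = 6 * 60

runs = [
    # (label, reported avg fps, reported captured frames)
    ("3.2Ghz Attract", 20, 7494),
    ("3.2Ghz Follow",  26, 9296),
    ("4.6Ghz Follow",  39, 13905),
]

for label, avg_fps, captured in runs:
    expected = avg_fps * DURATION_S
    print(f"{label}: reported {captured} frames, expected ~{expected}")
```

The estimates land close to the reported counts (the gaps come from the averages being rounded to whole fps), which is why the ~50% jump in captured frames tracks the ~50% jump in average FPS.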
 

Total War: Rome II


Prologue - The Siege of Capua @1920 x 1080:

Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz

CPU Avg: 28%
CPU Max: 54%
Gameplay Duration: 19 minutes 46secs
23,885 Frames Captured
FPS Avg: 20fps
FPS Max: 55fps
FPS Min: 10fps
Frame time Avg: 49.6ms

I added the CPU usage info for anyone who wanted to know the difference between stock and overclocked usage. It was pretty late when I performed this benchmark, so I forgot to log the CPU temperatures.

In the past I found that most games don't rely on the CPU heavily. Even recent high-end games like BF3 can run fine without massive CPU overclocks, as I've posted. Sometimes the gains aren't worth the power and CPU voltage. Well, that isn't the case with Total War: Rome II. This game depends on the CPU a lot. A decently clocked CPU can make your experience pleasant. If your CPU is slow, then things obviously won't go over so smoothly for your eyes and key input. As you can see from my benchmark above, I didn't have the best experience.

My Xeon is clocked @ 3.2Ghz, which down-clocks to 3.0Ghz once more than two cores are in use. Although I played through the Prologue with only minor issues for my taste [or just hype for the game], some things can't be ignored: random stutter, random input lag and low FPS, plus input delay and difficulties with the controls due to micro-stutter-like behavior. Now, what makes the "Prologue" different from the "Campaign" is that there is no overworld menu. The game places you directly in a huge battle with hundreds, if not thousands, of soldiers on the screen at any given time. The level is pretty large as well. Not to mention that I'm playing the game 100% MAXED, as I showed in the picture above. The game is gorgeous, but the gameplay wasn't the best. 20fps isn't that bad when everything is moving at a steady motion. However, once there's a lot happening on screen you can expect anywhere from 17-27fps. The frame time was all over the place and was spiking rapidly during my benchmark test. My two highest spikes were 88.8ms and 104ms! That is very unacceptable; however, it only happened twice. The experience was passable, but far from decent. Also remember that I'm running the 2GB reference GTX 670 for testing to give a fair CPU comparison. Read below for my overclocked settings.
 

Prologue - The Siege of Capua @1920 x 1080: Overclock 4.6Ghz+1600Mhz RAM
Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 @ 4.6Ghz
RAM: DDR3-1600Mhz

CPU Usage Avg: 27%
CPU Usage Max: 38%
CPU Temp Avg: 49C
CPU Temp Max: 54C
Room Ambient: Approx. 22C
Gameplay Duration: 17 minutes 29secs
35,112 Frames Captured
FPS Avg: 33fps
FPS Max: 77fps
FPS Min: 18fps
Frame time Avg: 29.9ms

Now that I've overclocked my CPU to 4.6Ghz and increased my DRAM to 1600Mhz, everything is much better. I gained 13 much-needed frames per second. To make things even better, the average frame time dropped 19.7ms, from 49.6ms to 29.9ms, which is very good. The experience improved noticeably; I instantly noticed the higher fps and smoother gameplay. There was also no input delay. Everything was smooth and the 33fps average felt consistent. The frames per second usually stayed above 40 throughout the benchmark test.

There was no rapid spiking in the frame rates or frame times during the benchmark. In the last test my frame time spiked to a ridiculous number, over 100ms. Although that only happened once, my overclocked settings were obviously much better; my two highest frame times were 55.8ms and 74.7ms. What matters most is the average fps and frame time overall. The Prologue was very enjoyable. I had no slowdowns thanks to steady frame times and frame rates. I'm going to use my 4.2Ghz OC settings instead of 4.6Ghz in my next test to see if there's a big difference between the two; 4.2Ghz will be easy for most overclockers to hit. 100% MAXED in a massive game like this is fine with me. I could always downgrade the graphics to Ultra instead of Extreme. However, I'm sticking with the Extreme + 100% max settings.
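Average FPS and average frame time are reciprocals of each other (fps = 1000 / frame time in ms), which makes a quick sanity check possible on any capture in this review. The numbers below are the Rome II Prologue averages quoted above:

```python
# Average FPS and average frame time (ms) are reciprocals of each other,
# so either number can be used to sanity-check the other.
def fps_to_frame_time_ms(fps):
    return 1000.0 / fps

def frame_time_ms_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

# Rome II Prologue averages quoted in this review:
stock_ft = fps_to_frame_time_ms(20)  # ~50.0ms, close to the measured 49.6ms
oc_ft = fps_to_frame_time_ms(33)     # ~30.3ms, close to the measured 29.9ms
print(f"20fps -> {stock_ft:.1f}ms, 33fps -> {oc_ft:.1f}ms")
print(f"29.9ms -> {frame_time_ms_to_fps(29.9):.1f}fps")
```

The small gaps between the converted and measured values come from the fps averages being rounded to whole numbers in the spec blocks.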


Campaign Mode @1920 x 1080:

Campaign – First 2 hours @1920 x 1080:
Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz

CPU Avg: 24%
CPU Max: 100%
CPU Temp Avg: 30c
CPU Temp Max: 37c
Ambient Temp: 19C
Gameplay Duration: 1 hour 57 secs
136,535 Frames Captured
FPS Avg: 37fps
FPS Max: 80fps
FPS Min: 13fps
Frame time Avg: 26.8ms

I figured that I'd benchmark this game as I played it. I really like this game; it's pretty deep, like other RTS games. Despite all of the fun I had playing, there were some major issues. Before I jump right into them I need to distinguish the Prologue from the Campaign. The Campaign is the "story". The Prologue obviously is what leads into the Campaign\story, but what makes it different is that the Prologue has no overworld map. The Prologue places you directly in battle, while the Campaign allows you to control everything with different menus. The reason I'm explaining this is that you can see the frames per second are up 17fps in the Campaign [37fps] compared to 20fps in the Prologue. The Campaign doesn't focus solely on decisive battles. However, the overworld view is pretty demanding. Scrolling fast across the map caused my frame rate to drop extremely low [13-19fps]. The drop was sharp and unexpected as well. There was a bit of stutter while playing, which ultimately leads to input delay.

Overall the game played much better. This was due to the areas being much smaller than the Prologue's and having fewer enemies on screen, so battles went a lot better than in the Prologue. The average frame times were a lot better, but could not prevent the micro stutter issues on the overworld map. The CPU average was low, but the CPU obviously wasn't moving data quickly enough. The CPU actually hit 100% during my play-through. This could be an error, or it could have happened while the overworld map was stuttering. That's definitely not good, but overall the game was decent and very playable. There were some stutter issues on the overworld map that I could not ignore.
 

Campaign – 1 hour 41 minutes @1920 x 1080: Overclock 4.6Ghz+1600Mhz RAM
Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 @ 4.6Ghz
RAM: DDR3-1600Mhz

CPU Usage Avg: 22%
CPU Usage Max: 37%
CPU Temp Avg: 45c
CPU Temp Max: 60c
Ambient Temp: 22c
Gameplay Duration: 1 hour 10 mins
189,477 Frames Captured
FPS Avg: 47fps
FPS Max: 89fps
FPS Min: 19fps
Frame time Avg: 21.4ms

I gained 10fps after overclocking the CPU and RAM, and the frame time improved as well. Overall the gameplay was much better. 4.2Ghz will get you 42fps. 4.6Ghz does make a difference, but 4Ghz-4.2Ghz will be fine for playing this game if you have a GTX 670 or anything near the 2GB reference specs.

Hopefully this helps those X58 users who are still wondering if the X5660 or L5639 is worth it for gaming. The answer is yes, it will be.
 


Crysis 3:

The real time benchmarks are games that I played while capturing real-time data. Instead of relying on the benchmark tools found in a lot of games nowadays, I play through the levels looking for micro stutter and\or delays. The L5639 handled maxed high-end games like Crysis 3, Tomb Raider and Metro: LL well. Let's see how the X5660 handles Crysis 3.


Welcome to the Jungle - 1920x1080p

X5660
CPU Max Temp: 58c

Stock GTX 670 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
GPU 1: 83c
GPU 2: 71c

Those temps were pretty much steady throughout the benchmark and gameplay. Due to heat concerns I had to run the GPU cores @ stock\915Mhz.

FPS:
Avg: 53
Max: 136

Min: 18

As you can see, not much has changed in this category from the L5639 review. I actually gained 3 frames per second and the frame rate was well above 60fps throughout the level. I'm guessing it's safe to say that 4Ghz-4.2Ghz will be fine for high-end gaming with the hex-cores; I personally run 4.6Ghz for high-end games. With no more CPU bottlenecking I've finally hit the max on my graphics cards. I was only getting 25fps to 35fps with my previous quad-core CPU. Two extra cores make a huge difference. The L5639 and X5660 are fine for high-end gaming.

Average Frame time: 19ms

My frame times were slightly better. The game played fine on both CPUs. The higher-clocked X5660 gave it an edge over the L5639. However, I'm sure if both CPUs were clocked at the same speed the difference wouldn't matter, just as it doesn't now. So i7-920 to i7-960 and pretty much all Bloomfield users will see a tremendous upgrade. Gaming-wise there is no comparison.
 


Battlefield 3

BF3 is one of the most gorgeous FPS games I've played. There are a lot of great looking games out there; EA isn't my favorite company, but they definitely invest in their studios. This game was requested as a benchmark, and I have to thank PontiacGTX for allowing me to benchmark it on Origin. It took me 2 hours to download 20.3GB of data, but it was worth it. I ran the benchmarks at 1600p, 1080p and 720p. I'm not going to post all of the charts; there are way too many.





All benchmarks were tested with Max Settings. My GPUs were at stock settings.
GTX 670 2GB 2-Way SLI @ 915Mhz.
[Boost: 1228Mhz]

While playing @ 1600p
X5660 @ 4.6Ghz
CPU: Max: 52c
GPU 1: Avg. 70c - Max: 76c
GPU 2: Avg. 65c - Max: 72c

Semper Fidelis [Campaign] @2560 x 1600p:

Gameplay Duration: 3 minutes 21 secs
Captured 14,690 frames
FPS Avg: 73fps
FPS Max: 110fps
FPS Min: 30fps
Frame time Avg: 13.7ms

The game plays great at 1600p. No micro stuttering at all; there was no input lag and everything was smooth. My highest frame time was 33.0ms, which is no problem at all to me. The average was 13.7ms, which is great. The game is still gorgeous and will be for a very long time. This level was short, so make your judgment from the other benchmarks.
 

Operation Swordbreaker [Campaign] @ 2560 x 1600p:
Gameplay Duration: 26 minutes 25 secs
Captured 123,237 frames
FPS Avg: 78fps
FPS Max: 120fps
FPS Min: 34fps
Frame time Avg: 12.9ms


Caspian Border [Multiplayer – Conquest 32v32] @ 2560 x 1600p:
Gameplay Duration: 23 minutes 29 secs
Captured 95,475 frames
FPS Avg: 66fps
FPS Max: 106fps
FPS Min: 34fps
Frame time Avg: 15.2ms
 

Operation Metro [Multiplayer – Conquest 32v32] @ 2560 x 1600p:
Gameplay Duration: 13 minutes 1 secs
Captured 58,831 frames
FPS Avg: 75fps
FPS Max: 112fps
FPS Min: 41fps
Frame time Avg: 13.3ms

Noshahr Canals [Multiplayer TDM 32v32] @ 1920x1080p:
Gameplay Duration: 21 minutes 49 secs
Captured 185,190 frames
FPS Avg: 141fps
FPS Max: 201fps
FPS Min: 76fps
Frame time Avg: 7.07ms

Operation Metro [Multiplayer – Conquest 32v32] @ 1280x720:
Gameplay Duration: 25 minutes 50 secs
Captured 265,517 frames
FPS Avg: 171fps
FPS Max: 224fps
FPS Min: 90fps
Frame time Avg: 5.86ms

Now, as you can see, the GTX 670 2GB SLI manhandles the Frostbite 2 engine. I'll probably get around to testing the Frostbite 3 engine if I ever get my hands on the game. I will be posting more Real Time Benchmarks soon.
----

I found something interesting while playing BF3 @ stock clocks with DDR3-1333Mhz.

Operation Swordbreaker [Campaign] @ 2560 x 1600p
[Stock Clocks+ DDR3-1333Mhz]:

CPU: Max: 36C
Gameplay Duration: 22 minutes 57 secs
100,692 frames captured
FPS Avg: 73fps
FPS Max: 115fps
FPS Min: 42fps
Frame time Avg: 13.7ms

Running with stock CPU settings – 3.0Ghz [x23] - 3.2Ghz [x24] + DDR3-1333Mhz – there are only minor differences versus running my PC @ 4.6Ghz+1600Mhz. From my stock test it appears that most games won't require a 3.8Ghz+ overclock in order to enjoy games at ultra-high resolutions [16:10 - 2560x1600p]. I had no micro stutter or issues at all. 2560x1600p played like a champ @ 100% maxed-out Ultra settings. The X5660 continues to impress me. I only lost approximately 5fps compared to the 4.6Ghz-DDR3-1600Mhz overclock. The raw numbers don't lie: my frame time only increased 0.8ms, which is no problem at all. My CPU max was only 36C with a room ambient of 24c. This is very good to know and I will continue to test stock clocks vs overclocked settings during my real-time gaming benchmarks. Not for every benchmark, but for high-end games like Crysis 3 100% maxed @ 1080p. I have yet to play multiplayer with stock settings. There isn't a huge difference in the Campaign thus far.



GTA: IV





The Cousins Bellic + It's Your Call Missions @2560x1600p:

Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 @ 3.2Ghz
RAM: DDR3-1333Mhz


Gameplay Duration: 27 minutes 4 secs
Captured 78,263 frames
FPS Avg: 48fps
FPS Max: 72fps
FPS Min: 30fps
Frame time Avg: 20.8ms

Using the latest high-end mods makes GTA IV look fantastic. I'm running the game maxed out + mods, and I'm also running all of the unrestricted command lines. The frame rate is decent and the frame time was great. The game was a little choppy at first, but got better as I continued to play. Overall it was very playable; it all depends on the area. Some areas will be a bit choppy along with input delay, while other areas are perfectly fine. The fps weren't as steady as I would like personally, but that isn't a bad thing. I did not use the V-sync option, but I'm sure it would help. The CPU usage average was 30%. However, this all changed when I overclocked my CPU [read below].


Three's a Crowd + First Date Missions @ 2560x1600p:

Stock GTX 670 2GB 2-Way SLI @ 915Mhz [Boost: 1228Mhz]
X5660 [overclocked]
RAM: DDR3-1600Mhz


Gameplay Duration: 23 minutes 4 secs
Captured 76,908 frames
FPS Avg: 55fps
FPS Max: 94fps
FPS Min: 25fps
Frame time Avg: 18.6ms

After running my overclocked settings and increasing my RAM from 1333Mhz to 1600Mhz, GTA IV with mods is much more playable. Not only did I gain 7 frames per second and lower my frame time average to 18.6ms, the frame rate was steady as well. I never benchmark with V-sync for obvious reasons, but with my overclocked X5660 I didn't need V-sync to keep the frames steady. If I were to overclock my GPUs to boost to 1241Mhz, I'm sure I could get well over 60fps. I'm just running stock clocks to keep the heat low and to give everyone an example of stock speeds. There was no input lag and no latency issues. I believe a minor overclock to 3.8Ghz-4.2Ghz would be fine. The CPU usage average was only 15% while playing.


X58 DDR3- 1600Mhz vs 1900Mhz vs 2000Mhz Performance % Comparisons

I've dedicated a large portion of my weekend to these benches, so please bear with me; I've done as much as I could. Also, big thanks to kpforce1 for the help with these benchmarks.

 


   It's been proven that faster RAM [1600Mhz+] can lead to better scores in several programs. This is more likely on newer architectures [post-X58]. It's possible on the X58 platform, but you'll nearly destroy the CPU, motherboard or RAM attempting to emulate the scores you see on X79, or you could potentially do some damage and cause major degradation. I tried to be as conservative as I could with the voltages and focused around 1900Mhz-2000Mhz. I probably could have, and probably should have, used more voltage, but I'm not risking my build for a faster RAM frequency. That's too big a risk for me. My board is only rated at 1866Mhz.

   However, I've always read differing views on the benefits for gaming & CPU benchmarks, so I've taken many hours to compare RAM frequencies. We all know that the GPU and CPU matter, but how much does RAM matter on the X58 platform? The fastest frequency I could achieve was 2200Mhz. The problem is that it required a lot of voltage to POST and run, and these voltages affect the CPU and RAM the most. Therefore I'm running only 1600Mhz – 2000Mhz, focusing mostly on the gaming side. Remember that my board is only rated for 1866Mhz, so hopefully this barrier didn't influence my scores that much. I also attempted to show as many scores as possible for drivers "331.82" & "337.50" with 1600Mhz, 1900Mhz and 2000Mhz in the chart. Below I'm going to compare the core speed + RAM frequency performance percentages.


Here is the list of the benchmarks that I’ve ran:

Kana Maru X5660 RAM Benchmarks

Here are the breakdowns of my 1600Mhz, 1900Mhz and 2000Mhz scores [all graphics and PhysX drivers]. I will be taking the highest scores and comparing them against the baseline results [increase or decrease %].
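The percentage figures below come from comparing each score against its baseline. Here is a short sketch of that calculation; the 3DMark-style scores are placeholders rather than my actual runs, though the 26fps to 39fps Star Swarm jump is from earlier in this review:

```python
# Percentage gain of a score over a baseline, as used in the comparisons
# below. The two 3DMark-style scores are hypothetical placeholders.
def percent_gain(score, baseline):
    return (score - baseline) / baseline * 100.0

baseline_1600mhz = 10000   # hypothetical 1600Mhz score
score_2000mhz = 10190      # hypothetical 2000Mhz score

print(f"2000Mhz vs 1600Mhz: {percent_gain(score_2000mhz, baseline_1600mhz):+.2f}%")

# The same formula applied to the Star Swarm jump earlier in the review:
print(f"26fps to 39fps: {percent_gain(39, 26):+.0f}%")
```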

3DMark 11 – Performance % Results:
4Ghz - 1900Mhz = +0.72%
4Ghz - 1600Mhz = 0.0%

4.6Ghz - 2000Mhz = +1.9%
4.6Ghz - 1600Mhz = 0.0%

4.8Ghz - 2000Mhz = +0.93%
4.8Ghz - 1600Mhz = 0.0%
--

3DMark 11 - Extreme % Results:
4Ghz - 1900Mhz = +0.59%
4Ghz - 1600Mhz = 0.0%

4.8Ghz - 1900Mhz = +0.29%
4.8Ghz - 1600Mhz = 0.0%
--

3DMark Fire Strike – Performance:
4Ghz - 1900Mhz = +0.29%
4Ghz - 1600Mhz = 0.0%

4.6Ghz - 2000Mhz = +0.94%
4.6Ghz - 1900Mhz = +0.62%
4.6Ghz - 1600Mhz = 0.0%

4.8Ghz - 2000Mhz = +0.18%
4.8Ghz - 1600Mhz = 0.0%
--

3DMark Ice Storm - Performance:
4.6Ghz - 1600Mhz = +1.07%
4.6Ghz - 2000Mhz = 0.0%

4.8Ghz - 2000Mhz = +3.30%
4.8Ghz - 1600Mhz = 0.0%
--

Tomb Raider 100% Maxed 1080p - Avg. FPS
4.8Ghz - 2000Mhz = +1.88%
4.8Ghz - 1600Mhz = 0.0%

--

Tomb Raider 100% Maxed 1600p - Avg. FPS
4.8Ghz - 2000Mhz = +0.88%
4.8Ghz - 1600Mhz = 0.0%

--

Battlefield 4 100% Maxed 1600p – Avg. FPS
4.6Ghz - 1600Mhz = +7.5%
4.8Ghz - 2000Mhz = 0.0%
--

Crysis 3
4.6Ghz - 1600Mhz = +5.87%
4.8Ghz - 2000Mhz = 0.0%

Here are some high results I didn't think I'd ever get



So, are RAM modules running frequencies higher than 1600Mhz worth it?


The answer is yes and no, depending on a few things. For the X58 platform, 1600Mhz – 1866Mhz should be more than enough, and has been for several years now. If you see a 1600Mhz and a 2000Mhz kit around the same price and your motherboard supports the frequency, then it might not be a bad decision to go with the higher frequency; if the 2000Mhz kit supports a CAS of 7 or 8, I'd go with the 2000Mhz modules instead of the 1600Mhz. RAM timings are another thing to consider, though. All else being equal I would go with 1600Mhz RAM, due to the fact that you can tighten the timings much better and usually get a lower CAS.

As far as overclocking goes, 1600Mhz gives me a bit more flexibility. My board [or probably the X58 platform itself] is limiting what I can achieve with 2000Mhz RAM. I believe I could get more out of 2000Mhz RAM, but the potential consequences aren't worth it. This is my only build and I don't know what I'd do without my 1st Generation Beast. I actually broke some of my old records, so I'm glad I ran these tests again with the latest drivers. The performance increases are pretty minor across the board when going from 1600Mhz to 2000Mhz. However, some of the numbers look much better with 2000Mhz. I actually hit 16767 in 3DMark 11, not quite the 17K that I'd like to hit [but that's close enough for me :thumb:]. I also finally broke 200,000 in 3DMark Ice Storm with 1600Mhz RAM; 2000Mhz just wasn't doing it. Real-Time Benchmarks™ show that there isn't a big difference. I'm able to tighten the timings on 1600Mhz and it shows when compared to 1900Mhz - 2000Mhz RAM in some of the benchmarks. Thanks for reading. I'll update as I continue to benchmark.


 

 

 


List of motherboards compatible with Xeon 56xx

  Here is a list of compatible motherboards that can use the latest X56xx Xeons. Be sure to update to the latest BIOS; some motherboards require a modded BIOS. Check prior to installing new Xeons.

  • Asus P6T
  • Asus P6TD DELUXE
  • Asus P6T DELUXE V2
  • Asus P6T6 WS Revolution
  • Asus P6T7 WS SuperComputer
  • Asus P6X58D-E
  • Asus P6X58D PREMIUM
  • Asus Sabertooth X58
  • Asus Rampage II GENE
  • Asus Rampage II Extreme
  • Asus Rampage III GENE
  • Asus Rampage III Formula
  • Asus Rampage III Extreme
  • Asus Z8NA-D6C
  • Gigabyte GA-EX58-UD5
  • Gigabyte GA-X58A-UD3R Rev. 1.0
  • Gigabyte GA-X58A-UD3R Rev. 2
  • Gigabyte X58A-OC
  • Gigabyte X58A-UD7
  • Gigabyte GA-X58-USB3
  • EVGA X58 Classified 4-Way SLI
  • EVGA X58 SLI Classified
  • EVGA X58 SLI FTW3
  • EVGA X58 3X SLI [132-BL-E758]
  • EVGA 131-GT-E767
  • EVGA 141-BL-E757
  • EVGA SR-2 Classified
  • SAPPHIRE PURE BLACK X58
  • MSI Big Bang-XPower (MS-7666)
  • Hewlett-Packard 0B54h
  • Foxconn BLOODRAGE
  • Mac-F221BEC8



 























Archived Comments


santi2104 - 4\8\2018:
Hi guys IDK if anyone is active here BUT i have problem to run X5660 on GA-EX58-UD3R rev. 1.6 MOBO. PC can't post, doing boot loop etc. Done all that stuff with clearing BIOS, different versions ETC. Any ideas or modded Bioses would be appreciated.

Jose Carlos Gadea - 5\9\2016:
I'm about to build a server with a Rampage II Extreme and a X5660, and 24GB of DDR3-1600. I was going to settle with a 920, but having seen this, the little extra cash is more than worth it.

santi2104 - 6\30\2015:
Awesome job man, i have a xeon x5650 that i use with a rampage 2 gene, it runs great with the latest bios, and i can lock the multyplier to x22 in all the cores, i also tested it with a msi x58 pro-e with the latest bios, and it worked great as long as i didnt overclock it that much, it was ok up to 180 Bclk, more than that and it was very unstable, more than 190 it wouldnt boot, even with 1.55 vcore, and the motherboard didnt have neither load line calibration nor fixed voltage, it was a pain in the ass, withe the rampage 2 gene it runs at 4.2ghz in all six cores with 1.272 vcore, and i got up to 5.1ghz with 1.54 vcore, awesome cpu for 70 us dollar

siapaajabisa - 3\13\2015:
Nice writing. I'm upgrading my 920 to X5650 on GIGABYTE GA-EX58-UD4 (F11T beta BIOS). Cost me 1.3 million Rupiah (IDR) or ± US$ 98.5. I'm so happy with this upgrade. Electricity bills are going to reduce a bit, and game performance are better than 920 (both using stock clock). Most noticeable performance upgrade are when i extracting some compressed files (mostly winRAR with size up to 10 GB). Only problem is i can't get my xeon next to 4 GHz. Gotta check the Internet for some cheat sheet though.
Adriano Fienga - 6\21\2016
  Please, is it working the X5650 on your UD4 Mobo? Thinking to do the same.. Thanks?
   siapaajabisa - 6\21\2016
   Hi Adriano. It's working well on my UD4. Just update the BIOS first to the latest.
     Adriano Fienga - 6\22\2016
     Thanks a lot, running latest F11T Bios and just ordered a x5650 for 80$, this will give new life to my rig :)
      siapaajabisa - 6\22\2016
       Cheers! That $80 is really worth it. Don't forget that you could use up to 48 gigs of RAM with this Xeon :)

Aditya Raja Gummadavelly - 2\6\2015:
Great read and it's interesting how newer generation processors aren't that far ahead. I think I'll still be rocking on x58, but my i7 920 is showing its age. A hexacore for $100 off eBay will be a very healthy upgrade.

Stephen Kunkel - 2\5\2015:
This is some excellent work. Thanks for taking the time.

lmimmfn - 1\14\2015:
Just to note in your mobos, theres 2 revisions for Gigabyte GA-EX58-UD5, officially Rev 2 supports the Xeons, but i have Rev 1 and it works fine also even though its not listed as supported on the Gigabyte site, cant overclock unless you set the ram voltage to 1.65v though. I got a 5650 and cranked it upto 4.4Ghz within a few hours. Oh nice review btw :