The Hellhound RX 7900 XTX takes on the RTX 4080 in more than 50 VR & PC Games, GPGPU & SPEC Workstation Benchmarks

The $999 Hellhound RX 7900 XTX arrived at BTR for evaluation last week from PowerColor. We have been comparing it against Nvidia’s new $1199 RTX 4080 Founders Edition (FE) and $1599 RTX 4090 FE plus five additional top cards. We focus on raw performance by benchmarking 42 PC and 10 VR games, GPGPU, workstation, SPEC, and synthetic benchmarks.

We will also compare the performance of these three new competing cards with the RX 6900 XT and RX 6800 XT reference editions and their competitors, the RTX 3080 Ti and RTX 3080 FE.

Features & Specifications

Although launched at the reference $999 XTX pricing, the Hellhound RX 7900 XTX has its factory Game Clock set 30MHz higher than the reference version’s 2300MHz. According to PowerColor’s specifications, the Hellhound RX 7900 XTX can boost its Game Clock to 2330MHz (2270MHz Silent) with the OC BIOS. The Game Clock is the expected GPU clock while running typical high-load gaming scenarios at stock, non-overclocked settings. However, the GPU Boost Clock can reach as high as 2525MHz – 25MHz higher than reference – using the OC BIOS, and we will test this.

Here are the Hellhound RX 7900 XTX features.

Source: PowerColor

Additional Information from PowerColor

  • The Hellhound has 2 modes, OC and Silent, with a BIOS switch on the side of the card. Even in performance mode it is said to be considerably quieter than the reference board, and the Silent mode is indeed very quiet.
  • The 14-layer high-TG PCB has a 12+3+2+2+1 phase VRM design. Hellhounds are over-spec’d to deliver the best stability and overclocking headroom; the high-quality VRMs run cooler and should last longer.
  • DrMos and high-polymer Caps are used without compromise.
  • The cooler features three 9-blade ball-bearing fans with 8 heat pipes (8X6Φ) across a high-density heatsink with a copper base. The PCB is shorter than the cooler.
  • It uses mute fan technology and the fans stop under 60C.
  • The Hellhound RX 7900 XTX includes a card stand to support it so as not to put extra strain on the PCIe slot.

The RX 7900 XTX is AMD’s brand new RDNA 3 flagship card, and the Hellhound represents one of the best choices for a mildly factory overclocked $999 card by virtue of its high-quality components and carefully selected GPUs coupled with good support and great warranty service.

The Test Bed

We benchmark using FCAT VR and FrameView on Windows 11 Pro Edition 22H2 with Intel’s Core i9-13900KF, and 32GB of T-Force Delta RGB 6400MHz CL40 DDR5 (2x16GB) memory on an ASUS Prime-A Wi-Fi Z790 motherboard with fast SSD storage. All games and benchmarks are patched to their latest versions, and we use recent drivers.

First, let’s take a closer look at the new PowerColor Hellhound RX 7900 XTX.

A Closer Look at the PowerColor Hellhound RX 7900 XTX

Although the Hellhound RX 7900 XTX advertises itself as a premium 24GB card which features ray tracing, Radeon Boost, and Anti-Lag, the cover of the box uses almost no text in favor of stylized imagery.

The back of the box touts key features which include ray tracing, Anti-Lag, DisplayPort 2.1, RDNA 3, FidelityFX, Infinity Cache, streaming aids, and Boost, as well as states its 800W power and system requirements. There is no mention of VR Ready Premium. Also highlighted are PowerColor’s custom cooling solution, Dual-BIOSes, fan improvements, and output LEDs. The default LED color is an eye-pleasing amethyst.

We open the box and note there are parts for a card stand.

The complete package contents, except for the anti-static bag, are pictured above together with the card holder parts. Above, the stand is shown fully assembled. Although the Hellhound is relatively heavy, it is not 4090-heavy, and we didn’t feel a need for it.

The Hellhound RX 7900 XTX is a large tri-fan card in a three slot design which is quite handsome with PowerColor’s neutral colors and even more striking with the LED on.

Turning it over we see a sturdy backplate featuring the Hellhound logo which also lights up with amethyst being the default color.

Looking at either long edge, we see the entire PCB is covered by heatpipes and heatsink fins. Additional power is provided by the PSU’s 2 x 8-pin PCIe power cables to the card’s connectors. There is also a switch to choose between the default overclock (OC) BIOS and the Silent BIOS. We didn’t bother using the Silent BIOS as the card is really quiet anyway, but it is good to have in case a BIOS flash goes bad.

Because the card is heavy, it should perhaps be locked down with two thumbscrews instead of one, or the included stand can be used.

The Hellhound’s IO panel connectors include 3 DisplayPorts and 1 HDMI connection.

Below is the other end which is very plain.

The Hellhound RX 7900 XTX looks great inside a case.

The specifications look good and the card itself looks solid. Now let’s check out its performance after we look over our test configuration and more on the next page.

Test Configuration

Test Configuration – Hardware

  • Intel Core i9-13900KF (HyperThreading and Turbo boost at stock settings)
  • ASUS Prime-A Z790 LGA1700 motherboard (Intel Z790 chipset, latest BIOS, PCIe 5.0, DDR5)
  • T-Force Delta RGB PC5-51200 6400MHz DDR5 CL40 2x16GB kit, supplied by TeamGroup
  • Valve Index, 90Hz / 100% SteamVR Render Resolution
  • Hellhound RX 7900 XTX, 24GB, factory clocks, supplied by PowerColor
  • RTX 4080 16GB Founders Edition, stock clocks, supplied by Nvidia
  • RTX 4090 24 GB Founders Edition, stock clocks, supplied by Nvidia
  • Gigabyte RX 6900 XT GAMING OC, 16GB, factory clocks
  • RX 6800 XT Reference 16GB, factory clocks, supplied by AMD
  • RTX 3080 Ti 12GB Founders Edition, stock clocks, supplied by Nvidia
  • RTX 3080 10 GB Founders Edition, stock clocks, supplied by Nvidia
  • 2 x 2TB T-Force Cardea Ceramic C440 (5,000/4,400MB/s) PCIe Gen 4 x4 NVMe SSDs (one for AMD/one for Nvidia)
  • T-Force M200 4TB USB 3.2 Gen2x2 Type-C external SSD (2,000/2,000MB/s), supplied by TeamGroup
  • Super Flower LedEx, 1200W Platinum 80+ power supply unit
  • MSI MAG Series CORELIQUID 360R (AIO) 360mm liquid CPU cooler
  • Corsair 5000D ATX mid-tower (plus 1 x 140mm fan & 2 x 120mm Noctua fans)
  • BenQ EW3270U 32″ 4K HDR 60Hz
  • LG C1 48″ 4K OLED HDR 120Hz display

Test Configuration – Software

  • GeForce 526.98 drivers for the RTX 4090/4080 and 527.27 for the RTX 3080/3080 Ti. Adrenalin 22.11.2 for the RX 6800 XT and RX 6900 XT, and press drivers for the RX 7900 XTX.
  • High Quality, prefer maximum performance, single display, set in the Nvidia control panel.
  • High Quality textures, all optimizations off in the Adrenalin control panel
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are Ultra Preset or highest with 16xAF always applied – no upscaling is used
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • VR charts use frametimes in ms where lower is better, but we also compare “unconstrained framerates” which shows what a video card could deliver (headroom; higher is better)
  • Windows 11 Pro edition; 22H2 recent clean install for GeForce and Radeon cards using separate but identical NVMe SSDs.
  • Latest DirectX
  • SteamVR latest beta

Games

Vulkan

  • Sniper Elite
  • DOOM Eternal
  • Red Dead Redemption 2
  • World War Z
  • Strange Brigade
  • Rainbow Six: Siege

DX12

  • A Plague Tale: Requiem
  • Spiderman: Remastered
  • F1 2022
  • Ghostwire: Tokyo
  • Elden Ring
  • God of War
  • Dying Light 2
  • Forza Horizon 5
  • Call of Duty: Vanguard
  • Marvel’s Guardians of the Galaxy
  • Far Cry 6
  • DEATHLOOP
  • Chernobylite
  • Resident Evil Village
  • Metro Exodus Enhanced Edition
  • Hitman 3
  • Godfall
  • DiRT 5
  • Assassin’s Creed Valhalla
  • Cyberpunk 2077
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • Borderlands 3
  • Tom Clancy’s The Division 2
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider

DX11

  • Overwatch 2
  • Total War: Warhammer III
  • Days Gone
  • Crysis Remastered
  • Destiny 2 Shadowkeep
  • Total War: Three Kingdoms
  • Grand Theft Auto V

VR Games

  • Assetto Corsa: Competizione
  • Elite Dangerous
  • F1 2022
  • Kayak VR: Mirage
  • Moss: Book II
  • No Man’s Sky
  • Project CARS 2
  • Skyrim
  • Sniper Elite
  • The Walking Dead: Saints & Sinners

Synthetic

  • Time Spy & Time Spy Extreme (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • VRMark Blue Room
  • AIDA64 GPGPU benchmarks
  • Blender 3.3.0 benchmark
  • Geekbench
  • Sandra 2020 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020
  • FCAT VR benching tool
  • OpenVR Benchmark tool

Adrenalin Control Panel settings

Here are the Adrenalin Control Panel settings.

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

Overclocking, temperatures and noise

We spent little time overclocking the Hellhound RX 7900 XTX for this review as we encountered some unexpected results that require further investigation. The card is very quiet; even under a heavy load its fans never spin up enough to be irritating or even noticeable. It is quieter than the Gigabyte RX 6900 XT or the RTX 3080 Ti.

The Hellhound RX 7900 XTX is factory clocked 30MHz higher than the reference version at 2330MHz using the OC BIOS. According to its specifications, the Hellhound can boost up to 2525MHz out of the box. From our benching, we see it boosting even higher, generally settling in above 2750MHz with peaks above 2780MHz.

The Hellhound temperatures stay in the low to mid-60s C with the fans quietly running well below 50% even using the OC BIOS under a full gaming load. It is an exceptionally well-cooled and quiet card.

Let’s head to the performance charts to compare the performance of the Hellhound RX 7900 XTX with six other cards.

The Hellhound RX 7900 XTX vs. the RTX 4080 FE and 5 other cards benchmarked with 42 games

Here are the performance results of 42 games and 3 synthetic tests. The highest settings are used and are listed on the charts. The benches were run at 2560×1440 and 3840×2160. Click on each chart to open in a pop-up for best viewing. Gaming results show average framerates in bold text, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font; they represent each game’s 1% lows (99th-percentile frametimes).
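For readers who want to reproduce our averages and 1% lows from their own logs, here is a minimal sketch of the arithmetic. It assumes a FrameView/PresentMon-style CSV with a MsBetweenPresents column; the column name and the simple percentile method are assumptions, and FrameView’s own reporting may differ slightly.

```python
import csv, statistics

def summarize(frameview_csv):
    """Average FPS and 1% low (99th-percentile frametime) from a
    FrameView/PresentMon-style log; the column name is an assumption."""
    with open(frameview_csv, newline="") as f:
        frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frametimes)
    frametimes.sort()
    idx = min(len(frametimes) - 1, int(len(frametimes) * 0.99))
    low_1pct_fps = 1000.0 / frametimes[idx]   # FPS at the slowest-1% boundary
    return avg_fps, low_1pct_fps
```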

The first set of charts show the seven main competing cards. Column two represents the $999 Hellhound RX 7900 XTX performance in between the $1599 RTX 4090 FE in column one and the RTX 4080 FE, its $1199 primary competitor, in the third column. The RTX 3080 Ti results are in the fourth column next to Gigabyte RX 6900 XT OC version performance results in the fifth column, followed up by the RTX 3080 in the sixth and the RX 6800 XT in the seventh column.

“Wins” between the RX 7900 XTX and the RTX 4080 are denoted by yellow text. If there is a tie, both values are in yellow.

Playing with the RX 7900 XTX, Elden Ring locked up the PC even after verifying files and reinstalling the Adrenalin drivers, and a driver issue appears to have prevented ray-traced Guardians of the Galaxy from running on the RX 6800 XT.

The Hellhound RX 7900 XTX and the RTX 4080 and RTX 4090 are cards that are primarily suited for 4K and high-FPS 1440P gaming and they stand out from the other four cards. The RX 7900 XTX trades blows with the RTX 4080 in rasterized games – they are equivalent cards if ray tracing is not considered.

Although RX 7900 XTX ray tracing has greatly improved over the RX 6900 XT and RX 6800 XT, it now appears to perform similarly to the RTX 3080 and RTX 3080 Ti but far behind the RTX 4080. FSR 2.0, although still not on the same image quality level as Nvidia’s DLSS 2, will almost double framerates for a very minor IQ hit and will make most of the games quite playable at Ultra/4K in this 52 game benching suite. Gamers who are not so impressed with ray tracing or who are not picky about image quality perfection may well prefer to save $200 on a $1000 Hellhound RX 7900 XTX over buying a $1200 RTX 4080.

Let’s look at synthetic benches.

Synthetic benches

We hold synthetic benches to be meaningless for predicting real world gaming performance versus competing cards with different architectures although they have other practical uses like overclocking and ranking. The RX 7900 XTX performs better in the synthetic tests than in gaming.

Let’s see how the Hellhound performs in ten popular VR (Virtual Reality) games next.

10 VR Games

For this review, we benchmarked the Valve Index using FCAT VR and set the SteamVR render resolution to 100% (2016×2240) which uses a factor of 1.4X (the native resolution is 1440×1600) to compensate for lens distortion and to increase clarity. We are going to compare the performance of the RX 7900 XTX with the RTX 4080 and the RTX 4090 at each game’s Ultra/Highest settings.
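As a quick check of where that 2016×2240 figure comes from, the 100% setting simply scales each axis of the Index’s native per-eye resolution by 1.4. This is a small sketch of the arithmetic described above; SteamVR’s actual slider works on total pixel count, so treat it as illustrative.

```python
# Per-axis 1.4x supersampling that SteamVR applies at "100%" render resolution
# to compensate for lens distortion (figures taken from the text above).
native_w, native_h = 1440, 1600            # Valve Index per-eye panel resolution
factor = 1.4
print(round(native_w * factor), round(native_h * factor))   # -> 2016 2240
```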

Unfortunately, FCAT VR still doesn’t work with MS Flight Simulator 2020 or with Star Wars Squadrons. Here are the ten VR games we tested.

VR Games

  • Assetto Corsa: Competizione
  • Elite Dangerous
  • F1 2022
  • Kayak VR: Mirage
  • Moss: Book II
  • No Man’s Sky
  • Project CARS 2
  • Skyrim
  • Sniper Elite
  • The Walking Dead: Saints & Sinners

Synthetic

  • Time Spy & Time Spy Extreme (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • VRMark Blue Room

IMPORTANT: BTR’s charts use frametimes in ms where lower is better, but we also compare “unconstrained framerates” which show what a video card could deliver (its headroom) if it wasn’t locked to either 90 FPS or to 45 FPS by the HMD. Unconstrained FPS measures just one important performance metric, and faster is better.
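A rough way to read the unconstrained numbers that follow (a sketch of the relationship, not FCAT VR’s exact internal math): unconstrained FPS is just the inverse of the GPU’s average frame delivery time, and when it falls below the headset’s 90 Hz target, SteamVR drops delivery to 45 FPS and synthesizes every other frame.

```python
def unconstrained_fps(avg_frametime_ms):
    """The framerate the GPU could deliver if the HMD did not cap delivery."""
    return 1000.0 / avg_frametime_ms

# Example: ~11.66 ms average GPU frametime -> ~85.8 unconstrained FPS. That is
# below the Index's 90 FPS target, so SteamVR drops delivery to 45 FPS and
# roughly every other displayed frame is synthesized (reprojected).
print(round(unconstrained_fps(11.66), 1))   # -> 85.8
```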

Let’s individually look at our 10 sim-heavy VR games’ performance using FCAT VR.

First up, Assetto Corsa: Competizione.

Assetto Corsa: Competizione (ACC)

BTR’s sim/racing editor, Sean Kaldahl, created the replay benchmark run that we use for both the pancake game and the VR game. It is run at night with 20 cars and lots of geometry, and the lighting effects of the headlights, tail lights, and everything around the track look spectacular.

Just like with Project CARS, you can save a replay after a race. Fortunately, the CPU usage is the same between a race and its replay so it is a reasonably accurate benchmark using the Circuit de Spa-Francorchamps. iRacing may be more accurate or realistic, but Assetto Corsa: Competizione has some appeal because it feels more real than many other racing sims. It delivers the sensation of handling a highly-tuned racing machine driven to its edge.

Here are the ACC FCAT VR frametimes using VR Ultra using the Hellhound RX 7900 XTX, the RTX 4080 FE, and the RTX 4090 FE.

Here are the details as reported by FCAT-VR:

The RX 7900 XTX managed 85.77 unconstrained FPS with 6339 (50%) synthesized frames with no dropped frames nor Warp misses.

The RTX 4080 delivered 118.42 unconstrained FPS with 207 (2%) synthesized frames with 1 dropped frame and 1 Warp miss.

The RTX 4090 achieved 164.03 unconstrained FPS together with 1 synthetic frame but with no dropped frames nor Warp misses.

The ACC racing experience is best with the RTX 4090 although the RTX 4080 delivers a nearly constant 90 FPS on the Epic VR preset unlike the RX 7900 XTX which requires one-half of its frames to be synthesized.

Next, we check out Elite Dangerous.

Elite Dangerous (ED)

Elite Dangerous is a popular space sim built using the COBRA engine. It is hard to find a repeatable benchmark outside of the training missions.

A player will probably spend a lot of time piloting his space cruiser while completing a multitude of tasks as well as visiting space stations and orbiting a multitude of different planets. Elite Dangerous is also co-op and multiplayer with a dedicated following of players.

We picked the Ultra Preset and we set the Field of View to its maximum.

Here are the frametimes.

Here are the details as reported by FCAT-VR:

The RX 7900 XTX managed 185.21 unconstrained FPS with no synthesized frames, dropped frames, or Warp misses.

The RTX 4080 delivered 230.98 unconstrained FPS with 1 synthesized frame and 1 dropped frame and 1 Warp miss.

The RTX 4090 brings 296.16 unconstrained FPS together with 2 synthetic frames but with 2 dropped frames and 2 Warp misses.

Although the Hellhound RX 7900 XTX has the lowest performance, the experience playing Elite Dangerous at Ultra settings is not perceptibly different on any tested video card. However, the RTX 4090 has a lot more performance headroom to increase the render resolution or to use a higher resolution headset like the Reverb G2 or the Vive Pro 2.

Let’s look at our newest VR sim, F1 2022.

F1 2022

Codemasters has captured the entire Formula 1 2022 season in F1 2022, and the VR immersion is good. The graphics are customizable and solid, handling and physics are good, the AI is acceptable, the scenery is outstanding, and the experience ticks many of the necessary boxes for a racing sim.

Here is the frametime plot for F1 2022.

Here are the details as reported by FCAT-VR.

The RX 7900 XTX delivered 156.57 unconstrained FPS with 6 synthesized but no dropped frames nor Warp misses.

The RTX 4080 achieved 200.24 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

The RTX 4090 delivered 254.72 unconstrained FPS together with 3 synthetic frames plus 3 dropped frames and 3 Warp misses.

The experience playing F1 2022 using the Ultra preset is not very different on any of these video cards but the RTX 4090 and RTX 4080 have considerably more performance headroom than the RX 7900 XTX to use 120Hz/144Hz or to use a higher resolution headset.

Kayak VR: Mirage

The outstanding near-photorealistic visual fidelity really sets Kayak VR: Mirage apart from other simulators. It boasts a wide range of locales with day/night/sunset options offering tropical, icy, desert, and even stormy scenarios with trips to Costa Rica, Antarctica, Norway, and Australia and occasional interactions with wildlife. It can be played as a relaxing sim or as a strenuous workout with competitive time trials which offer asynchronous multiplayer and ranking on global leaderboards.

We benchmark at 100% resolution with the highest “Cinematic” in-game settings but do not use DLSS or FSR.

Here is the frametime plot for Kayak VR: Mirage.

Here are the FCAT-VR details.

The RX 7900 XTX delivered 198.98 unconstrained FPS with no synthesized frames or dropped frames nor Warp misses.

The RTX 4080 delivered 257.16 unconstrained FPS with 1 synthesized and 1 dropped frame and 1 Warp miss.

The RTX 4090 got 329.35 unconstrained FPS together with 1 synthetic frame and 1 dropped frame plus 1 Warp miss.

Kayak VR: Mirage looks fantastic at 100% resolution with maximum settings and would be well-suited for play on the Reverb G2 with any of our test cards.

Next, we look at Moss: Book II.

Moss: Book II

Moss: Book II is an amazing VR experience with much better graphics than the original game. It’s a third-person puzzle adventure game, played seated, that offers direct physical interaction between you (the Reader) and your avatar, Quill, a mouse who brings real depth to the story. Extreme attention has been paid to the tiniest details, with great overall art composition and outstanding lighting that make this game a must-play for gamers of all ages.

Moss II boasts very good visuals and we use the in-game highest settings.

Here are the frametimes plots of our four cards.

Here are the details as reported by FCAT-VR:

The RX 7900 XTX delivered 189.29 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

The RTX 4080 delivered 308.44 unconstrained FPS with 1 synthetic and 1 dropped frame and 1 Warp miss.

The RTX 4090 achieved 436.34 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

Unfortunately, the experience playing Moss II on the Valve Index using the RX 7900 XTX is marred by visual issues including artifacting and shimmering.

Next, we will check out another demanding VR game, No Man’s Sky.

No Man’s Sky (NMS)

No Man’s Sky is an action-adventure survival game, playable solo or in multiplayer, that emphasizes exploration, fighting, and trading. It is set in a procedurally generated deterministic open universe of over 18 quintillion unique planets and runs on its own custom game engine.

The player takes the role of a Traveller in an uncharted universe, starting on a random planet with a damaged spacecraft, equipped with only a jetpack exosuit and a versatile multi-tool that can also be used for defense. The player is encouraged to find resources to repair the spacecraft, allowing for intra- and inter-planetary travel, and to interact with other players.

Here is the No Man’s Sky frametime plot. We set the settings to Maximum which is a step over Ultra including setting the anisotropic filtering to 16X and upgrading to FXAA. We did not use any upscaling method.

Here are the FCAT-VR details of our comparative runs.

The RX 7900 XTX brought 108.17 unconstrained FPS with 3536 (50%) synthesized frames but no dropped frames nor Warp misses.

The RTX 4080 delivered 159.10 unconstrained FPS with 2 synthesized frames but with no dropped frames nor Warp misses.

The RTX 4090 achieved 201.96 unconstrained FPS together with 17 synthetic frames but with no dropped frames nor Warp misses.

RX 7900 XTX gamers may want to lower some individual settings to remain above 90 FPS. The RTX 4080 and RTX 4090 have enough performance headroom to increase the refresh rate, render resolution, or to perhaps use a higher resolution headset.

Let’s continue with another VR game, Project CARS 2, that we still like better than its successor even though it is no longer available for online play.

Project CARS 2 (PC2)

There is still a sense of immersion that comes from playing Project CARS 2 in VR using a wheel and pedals. It uses its in-house Madness engine, and the physics implementation is outstanding.

Project CARS 2 offers many performance options and settings.

Project CARS 2 performance settings

We used maximum settings including for Motion Blur but picked SMAA Ultra instead of MSAA.

Here is the frametime plot.

Here are the FCAT-VR details.

The RX 7900 XTX delivered 194.77 unconstrained FPS with no synthesized nor dropped frames or Warp misses.

The RTX 4080 got 200.88 unconstrained FPS with no synthesized frames nor dropped frames and no Warp misses.

The RTX 4090 achieved 253.50 unconstrained FPS together with 3 synthetic frames plus 2 dropped frames and 2 Warp misses.

The experience playing Project CARS 2 using maximum settings is similar for all three video cards.

Next we will check out a classic VR game, Skyrim VR.

Skyrim VR

Skyrim VR is an older game that is no longer supported by Bethesda, but fortunately the modding community has adopted it. It is not as demanding as many of the newer VR ports so its performance is still very good on maxed-out settings using its Creation engine.

We benchmarked vanilla Skyrim using its highest settings plus we increased the in-game Supersample option to maximum.

Here are the frametime results.

Here are the details of our comparative runs as reported by FCAT-VR.

The RX 7900 XTX provided 218.2 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

The RTX 4080 achieved 239.08 unconstrained FPS with 2 synthetic frames plus 2 dropped frames and 1 Warp miss.

The RTX 4090 delivered 337.76 unconstrained FPS together with 2 synthetic frames, 2 dropped frames, and 1 Warp miss.

All cards deliver an identical vanilla Skyrim VR experience with a ton of extra performance headroom to add mods and, in addition, to raise the render resolution using the two faster cards.

Next we check out Sniper Elite VR.

Sniper Elite VR

Sniper Elite VR’s visuals are decent with good texture work that is well-realized. The building architecture and panoramas look good, explosions are convincing and the weapons convey a sense of weight, although not achieving realism. It is primarily an arcade style sniping game featuring its signature X-Ray kill cam, but it offers multiple ways to achieve goals including with explosives and by using three other main weapon choices besides your rifle.

We benchmarked using the Highest settings.

Here is the frametime plot.

Here are the details:

The RX 7900 XTX delivered 197.98 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

The RTX 4080 delivered 223.33 unconstrained FPS with no synthesized or dropped frames nor Warp misses.

The RTX 4090 brought 318.03 unconstrained FPS together with 1 synthetic and 1 dropped frames and 1 Warp miss.

All three cards deliver a similar playing experience on High with the RTX cards offering more performance headroom. We recommend that any performance headroom be used for increasing the SteamVR render resolution.

Last up, The Walking Dead: Saints & Sinners.

The Walking Dead: Saints & Sinners

The Walking Dead: Saints & Sinners is the last game in BTR’s 10-game VR benching suite. It is a first-person survival horror adventure RPG with a strong emphasis on crafting. Its Unreal Engine 4 visuals are very good, and it makes good use of physics for interactions.

We benchmarked Saints and Sinners using its High preset and we left the Pixel Density at 100%. Here is the frametime chart.

Here are the details as reported by FCAT-VR.

The RX 7900 XTX delivered 198.93 unconstrained FPS with no synthetic nor dropped frames or Warp misses.

The RTX 4080 got 260.94 unconstrained FPS with 1 synthetic frame, 1 dropped frame, and 1 Warp miss.

The RTX 4090 achieved 366.41 unconstrained FPS together with 6 synthetic frames and with 4 dropped frames and 4 Warp misses.

The RX 7900 XTX experience was marred by artifacting and shimmering.

Let’s check out synthetic VR tests and unconstrained framerates.

Unconstrained Framerates & Synthetic VR Benchmarks

The following chart summarizes the overall Unconstrained Framerates (the performance headroom) of our three cards using our 10 VR test games. In addition, we added recent RTX 3080 Ti and 6900 XT results for comparison. The preset is listed on the chart and higher is better. In addition, we present three synthetic VR benchmarks.

Although synthetic VR benches (except for OpenVR benchmark) predicted good VR performance, we were disappointed with our 7900 XTX VR experience, unlike with pancake games. In at least two games, we experienced distracting visual artifacting and texture shimmering. The 7900 series may benefit from some attention to VR from the Radeon driver team as in many cases it even falls behind the RX 6900 XT.

At AMD’s press event in Las Vegas, the presenters noted that AMD drivers continue to improve for the entire life of the architecture – generally up to a 10% performance gain – often compared to “fine wine” aging well. However, for VR enthusiasts today, the RX 7900 XTX is disappointing: it performs well behind the RTX 4080, not logging a single performance win.

We next look at creative, pro, GPGPU, and workstation apps.

Creative, Pro & Workstation Apps

Let’s look at non-gaming applications next to see if the RX 7900 XTX is a good upgrade from the other video cards that we tested starting with Blender.

Blender 3.3.0 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked three Blender 3.3.0 benchmark scenes which measure GPU performance by timing how long it takes to render production files. We tested seven of our comparison cards using CUDA, OptiX, and HIP.

For the following chart, higher is better as the benchmark renders a scene multiple times and gives the results in samples per minute.

The RX 7900 XTX sits well ahead of the RX 6800 XT and 6900 XT but well behind the GeForce cards.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.80

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison with both the RTX 3070 and RTX 2080 Ti GPGPU benchmarks.

Here are the Hellhound RX 7900 XTX AIDA64 GPGPU results compared with an overclocked i9-13900KF.

Here is the chart summary of the AIDA64 GPGPU benchmarks with seven of our competing cards side-by-side.

The RX 7900 XTX is a fast GPGPU card and it compares favorably with the competing cards being weaker in some areas and stronger in others. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them.

In the Sandra GPGPU benchmarks, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses. However, we see some very solid improvement of the RX 7900 XTX over the RX 6900 XT and the RX 6800 XT.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 raw scores for the Hellhound RX 7900 XTX. RTX 4080 raw scores are displayed below the XTX results for a detailed performance comparison.

Here are our RTX 4080 SPECworkstation 3.1 raw scores:

Here are the Hellhound XTX SPECworkstation3 results summarized in a chart along with six competing cards. Higher is better.

Using SPEC benchmarks, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) recently released a new 2020 version of its SPECviewperf benchmark that features updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management.

We benchmarked at 4K and here are the summary results for the Hellhound RX 7900 XTX.

Here are SPECviewperf 2020 Hellhound RX 7900 XTX benchmarks summarized in a chart together with six other cards.

Again we see different architectures with different strengths and weaknesses. After seeing these benches, some creative users may upgrade their existing systems with a new card based on the performance increases and the associated increases in productivity that they require.

The decision to buy a new video card should be based on each user’s workflow and requirements as well as their budget. Time is money depending on how these apps are used. However, the target demographic for the reference and Hellhound RX 7900 XTX is primarily gamers.

Let’s head to our conclusion.

The Conclusion

The Hellhound RX 7900 XTX improves significantly over the last generation RX 6900 XT, easily exceeds RX 6800 XT performance, and it trades blows with the $200 more expensive RTX 4080 FE in rasterized games although overall it is slightly slower using our 42-game benching suite. The Hellhound RX 7900 XTX beats all of the last generation cards including the RTX 3080 Ti although it still struggles with ray traced games compared with RTX cards.

For Radeon gamers, the Hellhound RX 7900 XTX is a good alternative to GeForce Ada Lovelace cards for the vast majority of modern PC games that use rasterization. The RX 7900 XTX offers 24GB of GDDR6 compared with the 16GB of GDDR6X that the RTX 4080 is equipped with, but that extra 8GB of vRAM shouldn’t make any practical difference to game performance in the near future.

At its suggested price of $999, the Hellhound RX 7900 XTX costs about $200 less than the RTX 4080 FE and offers good value for Radeon gamers. Unlike the RTX 4080, whose price increased from the RTX 3080’s $700 to $1200, the RX 7900 XTX is priced at the same $999 as AMD’s last generation RX 6900 XT. For Radeon buyers, what makes the Hellhound XTX particularly attractive is that there is no price premium for this mildly overclocked PowerColor card.

The only real issue that we see with Radeon 7000 series cards is that AMD’s FSR upscaling is still inferior to Nvidia’s DLSS AI upscaling, which delivers similar performance but with better image quality. On the flip side, there are still relatively few ray traced games released every year in comparison to thousands of rasterized games where the RX 7900 XTX trades blows with the much more expensive RTX 4080.

One major issue, although it affects relatively few gamers, is the RX 7900 XTX’s poor VR performance compared with the RTX 4080. It’s going to need some attention from AMD’s driver team before we can recommend the RX 7900 XTX for the best VR gaming.

We recommend the Hellhound RX 7900 XTX as a great choice out of multiple good choices, especially for any AMD PC gamer looking for good looks with LED lighting, an exceptional cooler, great performance for 2560×1440 or 4K, PowerColor’s excellent support, and overall better value compared with the slower RX 7900 XTX reference version.

Let’s sum it up:

Hellhound RX 7900 XTX Pros

  • The PowerColor Hellhound RX 7900 XTX is much faster than the last generation RX 6900 XT by virtue of new RDNA 3 architecture. It trades blows in the majority of rasterized games with the RTX 4080 FE for significantly less money ($200 less)
  • The Hellhound RX 7900 XTX has excellent, very quiet cooling thanks to its 3-fan custom design and very good power delivery; it stays quiet even when overclocked using the OC mode
  • The dual BIOS gives the user a choice of quiet operation with lower clocks, or a slightly louder mode with higher power limits and overclocks
  • FidelityFX Super Resolution (FSR) 2.0 allows for upscaling and improved sharpness with almost no performance penalty, and there is a low-latency mode for competitive gamers
  • LED lighting and a neutral color allow the Hellhound RX 7900 XTX to fit into any color scheme
  • 24GB vRAM compared with 16GB for the RTX 4080

Hellhound XTX Cons

  • Cost. It’s still very expensive at $999
  • VR performance is subpar
  • Weaker ray tracing performance than the RTX 4080

The Hellhound RX 7900 XTX is a good Radeon card choice for those who game at 2560×1440 or at 4K and want the best that AMD has to offer. It represents a good gaming alternative to the RTX 4080, albeit with weaker ray tracing performance. It is offered especially for those who prefer AMD cards and FreeSync-enabled displays, which are generally less expensive than G-SYNC displays. And if a gamer is looking for something extra above the reference version, the PowerColor Hellhound RX 7900 XTX is a very well-made and good-looking card that will overclock better.

We are giving the Hellhound RX 7900 XTX BTR’s Recommended Award.

The Verdict:

  • PowerColor’s Hellhound RX 7900 XTX is a solidly-built, handsome card with higher clocks out of the box than the same-priced reference version. It trades blows with the RTX 4080 in rasterized games. It is a kick-ass RX 7900 XTX.

Stay tuned, there is much more coming from BTR. We will soon return to VR with a mega performance evaluation to test the role of the CPU in VR performance. And we’ll retest the RX 7900 XTX using higher resolution headsets after AMD’s driver team has a chance to address its VR issues. We also plan to test Intel ARC video cards in VR.

Happy Gaming!

Intel 12th Gen Windows 11 vs. Windows 10 Performance Analysis including HAGS – 40 Games & Workstation Benchmarks with i7-12700KF/RTX 3080

This Windows 11 versus Windows 10 performance analysis is a follow-up to this week’s review of PC Gamerz Hawaii premium Blue Elite 12700KF/RTX 3080/DDR4 prebuild. They are still using Windows 10 Pro so we performed all of our detailed benching on that operating system and then did a clean install of Windows 11 Pro using the same settings. We will see if there are any performance advantages or disadvantages for Intel 12th Gen gamers or creators still on Windows 10 who are considering an upgrade.

We benchmarked 20 games with HAGS (Hardware Accelerated GPU Scheduling) off between Windows 10 and 11 and then turned on HAGS for all 40 Windows 11 games. Before we head to the performance charts featuring 40 games plus creative, workstation, and professional benchmarks, it’s important to detail the hardware and software configuration used for our benchmarking as well as our testing methodology.
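For anyone repeating this comparison, here is a minimal sketch of how to confirm which HAGS state was actually benchmarked. It reads the commonly documented HwSchMode registry value (2 = on, 1 = off); that value name is an assumption on our part, and toggling should still be done through Settings > System > Display > Graphics, followed by a reboot.

```python
import winreg  # Windows only

# Hardware Accelerated GPU Scheduling state via the commonly documented
# HwSchMode value (2 = enabled, 1 = disabled) - the value name is an assumption.
KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    mode, _ = winreg.QueryValueEx(key, "HwSchMode")
print("HAGS on" if mode == 2 else "HAGS off")
```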

Test Configuration – Hardware

PC Gamerz Hawaii Blue Elixir

  • Intel Core i7-12700KF (HyperThreading/Turbo boost On) (All listed Blue Elixir hardware except the portable SSD supplied by PC GamerZ Hawaii)
  • ASUS TUF Gaming H670-PRO WIFI D4 (Intel H670 chipset, latest BIOS, PCIe 5.0/5.0/3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x)
  • G.SKILL Trident Z 16GB DDR4 (2x16GB, dual channel at 3600MHz)
  • Crucial P2 1TB NVMe SSD PCIe 3.0 (2400MBps/1900MBps Read/Write) for C: drive
  • The T-FORCE M200 4TB USB 3.2 Gen2x2 Type-C Portable SSD (supplied by Team Group for game storage)
  • EVGA 850B5, 850W Bronze PSU
  • ACER (LC27G75TQSNXZA) 27″ 1920×1080/165Hz monitor
  • Lian-Li Galahad 360 AIO Cooler
  • CoolerMaster TD500 Mesh White

Test Configuration – Software

  • GeForce 512.77
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel; Vsync off.
  • Optimizations are off, Vsync is forced off, Texture filtering is set to High Quality, and Power management prefer maximum performance
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold, with minimum frame rates (1% lows/99th percentiles) shown next to the averages in a smaller italics font; higher is better.
  • Windows 11 Pro edition clean install and Windows 10 64-bit Pro edition; latest updates. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX

Games

Vulkan

  • DOOM Eternal
  • Wolfenstein Youngblood
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • God of War
  • Ghostwire: Tokyo
  • Elden Ring
  • Dying Light 2
  • Forza Horizon 5
  • Call of Duty: Vanguard
  • Guardians of the Galaxy
  • Far Cry 6
  • Chernobylite
  • Resident Evil Village
  • Metro Exodus Enhanced Edition
  • Hitman 3
  • Godfall
  • DiRT 5
  • Assassin’s Creed: Valhalla
  • Cyberpunk 2077
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • F1 2021
  • Borderlands 3
  • Tom Clancy’s The Division 2
  • Battlefield V
  • Shadow of the Tomb Raider
  • Civilization VI – Gathering Storm Expansion

DX11

  • Total War: Warhammer III
  • Days Gone
  • Crysis Remastered
  • Destiny 2 Shadowkeep
  • Total War: Three Kingdoms
  • Overwatch
  • Assetto Corsa: Competizione
  • Grand Theft Auto V

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike & Extreme
  • Superposition
  • VRMark Blue Room
  • Cinebench
  • GeekBench
  • OctaneBench
  • AIDA64 CPU, cache & memory, and GPGPU benchmarks
  • Blender 3.01 benchmark
  • Sandra 2021 CPU Benchmarks
  • SPECviewperf 2020
  • SPEC Workstation

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

Let’s head to the performance charts.

Performance Summary Charts

Here are the performance results of 40 games and 5 synthetic tests comparing the performance of Windows 11 with Windows 10 using PCGz’ Blue Elixir. Click on each chart to open in a pop-up for best viewing.

All gaming results show average framerates in bold text, and higher is better. Minimum framerates (1% lows/99th percentiles) are next to the averages in italics and in a slightly smaller font. We picked the highest settings as shown on the charts. Wins (or ties) are shown in yellow text.

Windows 10 HAGS off vs. Windows 11 HAGS off vs. Windows 11 HAGS on

We first benchmarked 20 games with HAGS (Hardware Accelerated GPU Scheduling) off between Windows 10 and 11, and also using Windows 11 with HAGS on vs. off. We did not benchmark Windows 10 with HAGS on. Column 1 shows the Windows 10 results (HAGS off) versus Column 2 which shows Windows 11 results (HAGS off). Performance wins are in yellow text. Column 2 repeats Windows 11 results with HAGS off versus Column 3 with Windows 11 with HAGS on. We use a slightly darker yellow text to show Windows 11 performance wins between HAGS off versus on.

Interestingly, nineteen of twenty average results between HAGS off Windows 10 and Windows 11 are approximately within what is considered the 3% margin of benchmarking error. Civilization VI, a CPU-heavy benchmark, is the only outlier that favors Windows 10. When HAGS is turned on for Windows 11, Civ’s performance normalizes between the OSes. HAGS on for Windows 11 doesn’t appear to give any significant performance disadvantages so we benchmarked all 40 games with HAGS on for Windows 11 in the following set of charts.
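As a concrete illustration of how we treat that 3% cutoff (the threshold is our benchmarking convention rather than a statistical rule, and the framerates below are made-up examples):

```python
def is_tie(fps_a, fps_b, margin=0.03):
    """Treat two results as a tie if they differ by 3% or less of the slower one."""
    slower, faster = sorted((fps_a, fps_b))
    return (faster - slower) / slower <= margin

print(is_tie(118.4, 120.9))   # ~2.1% apart -> True, within the margin of error
print(is_tie(118.4, 124.6))   # ~5.2% apart -> False, a meaningful difference
```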

Windows 10 HAGS off vs. Windows 11 HAGS on

Column 1 shows the Windows 10 performance results (HAGS off) versus Column 2 which shows Windows 11 results (HAGS on). Performance wins are in yellow text.

Again, most of the results fall within the benchmarking margin of error. For most of the games, the average performance is quite close. Notable outliers occur in several minimums and especially with the CPU-heavy benchmark, Total War: Three Kingdoms, where Windows 11 minimums are far higher than with Windows 10.

Let’s look at non-gaming applications next to see how Windows 11 compares to Windows 10 in creative/workstation/pro tasks starting with Blender benchmarks.

Blender 3.01 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked all three Open Data Blender.org benchmarks, which measure both CPU and GPU performance in samples per second when rendering production files.

For the following chart, higher is better as the benchmark renders a scene multiple times and gives the results as samples per second.

Windows 11 Blender benchmark results have a slight edge over Windows 10.

Next, we move on to AIDA64 CPU, Cache & Memory, and GPGPU benchmarks.

AIDA64 v6.70

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards, while its CPU benchmarks compare the relative performance of processors.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days.

CPU/FPU Benchmark Results

AIDA64 CPU/FPU results are summarized below in two charts for comparison.

GPGPU Benchmark Summary

Here is the AIDA64 GPGPU comparison summarized between Windows 11 and Windows 10 below.

Cache & Memory Benchmarks

Here is the summary chart of the cache & memory benchmarks.

There are no real differences between AIDA64 Windows 11 and Windows 10 benchmark results. So let’s look at Sandra 2021 next.

SiSoft Sandra 2021

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2021. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2021 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2021 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran the latest version of Sandra’s intensive Processor benchmarks and summarize the overall results below.

In Sandra’s synthetic CPU benchmarks, Windows 10 scores higher than Windows 11.

Cinebench

Cinebench is based on MAXON’s professional 3D content creation suite, Cinema 4D. This latest R23 version of Cinebench can test up to 64 processor threads accurately and automatically. It is an excellent tool to compare CPU/memory performance and higher is better.

Cinebench’s Multi-Core benchmark will stress a CPU reasonably well over its 10-minute run and will show any weaknesses in CPU cooling. This is the test where we discovered that the Blue Elixir’s 12700K hit nearly 100C on Core 5, which led us to conclude that the PCGz builders used the wrong LGA 1200 backplate instead of an LGA 1700 one.

Here is the summary chart.

Windows 11 scores a bit higher than Windows 10 in Cinebench. Now we benchmark using GeekBench which measures CPU and GPU performance.

GeekBench

GeekBench is an excellent CPU/GPU benchmarking program which runs a series of tests and times how long the processor takes to complete its tasks. It focuses on CPU multi-core and single-core performance as well as GPU performance using OpenCL, CUDA, and Vulkan.

The summary charts below show the comparative performance scores.

In Geekbench, Windows 11 tends to score higher than Windows 10.

Let’s check out OctaneBench, another GPU-heavy test.

Octanebench

OctaneBench allows you to benchmark GPUs using OctaneRender. The hardware and software requirements to run OctaneBench are the same as for OctaneRender Standalone.

Here is the summary chart:

There is no meaningful difference between Octanebench benchmarks run on Windows 11 and Windows 10. Next up, SPECworkstation.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except to vendors of computer-related products and/or services. The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads.

SPECworkstation benchmarks are very demanding and all benchmarks were tested in an official run.
Here are the SPECworkstation Raw Scores which give the details.
Windows 11 trades blows with Windows 10 in SPECworkstation3 benchmarks. Although a few individual benches refused to run on Windows 10, there is no clear winner.
Now, let’s look at a GPU-heavy SPEC benching suite, SPECviewperf 2020.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) released a 2020 version of its SPECviewperf benchmark that features updated viewsets, new models, support for up to 4K display resolutions, and improved set-up and results management. We use a 1900×1060 display resolution. Here are the SPECviewperf 2020 benchmarks summarized in the chart below.

Again there is no clear winner between the OSes. Let’s head to our conclusion.

Final Thoughts

We can conclude from our benchmarking using the PCGz Hawaii Blue Elite i7-12700KF/DDR4/RTX 3080 FTW PC that there is very little performance difference between Windows 10 and Windows 11. There is no reason not to upgrade to Windows 11, although there appear to be no performance disadvantages to remaining on Windows 10.

As to enabling HAGS or not on Windows 11, we agree with Rodrigo, who found some HAGS performance inconsistencies but also concluded:

“Anyway, the HAGS feature is still quite promising and can improve performance in some cases, so we also recommend doing your testing to see how it works with your gaming rig and set of favorite games.”

Later this week, we will follow up with a T-FORCE NVMe SSD review and then with a VR review featuring the Hellhound RX 6650 XT versus the RX 6700 XT and versus the RTX 3060 Ti.

Happy Gaming!

The PowerColor Red Devil RX 6600 XT takes on the RTX 3060 & RTX 3060 Ti in 32 Games

The Red Devil RX 6600 XT arrived at BTR for evaluation from PowerColor as a premium, overclocked, 8GB vRAM-equipped 128-bit card with no manufacturer suggested (SEP/MSRP) pricing as yet, although base models start at a rather high $379 considering it targets 1080P. We have been exhaustively comparing it with the $329 EVGA RTX 3060 Black 12GB and the $399 RTX 3060 Ti 8GB Founders Edition using 32 games, GPGPU, workstation, SPEC, and synthetic benchmarks.

We will also compare the performance of these competing cards with the RX 6600 XT’s bigger brother, the Red Devil RX 6700 XT (the reference card SEP is $479), and with its predecessor, the ASUS TUF Gaming X3 RX 5600 XT (at $309, which is $30 above AMD’s entry-level pricing of $279), as well as with the RX 5700 XT Anniversary Edition ($499/$449 reference at launch).

The Red Devil RX 6600 XT is factory clocked higher than the reference specifications using its OC BIOS. While the reference Radeon RX 6600 XT offers a Game clock of up to 2359MHz and a Boost clock of 2589MHz, the PowerColor Red Devil game clocks up to 2428MHz and boosts to 2607MHz. It also looks different from older generation classic Red Devils, arriving in a more neutral gray color instead of all red and black. The Red Devil RX 6600 XT features an RGB mode whose LEDs default to a bright red and may be customized with PowerColor’s DevilZone software.

The Reference and Red Devil RX 6600 XT Features & Specifications

Source: PowerColor

First let’s look at the Red Devil RX 6600 XT specifications:

Additional Information from PowerColor

PowerColor’s newest RX 6600 XT Red Devil card is positioned to compete directly with premium custom RTX 3060 models.

  • The card has 2 modes, OC and Silent, with 145W / 135W power targets and a BIOS switch on the side of the card. We designed this card to be very quiet; even in performance mode it is considerably quieter than most silent cards, but we also advise trying the Silent mode as it is truly whisper quiet. With a normal case and optimal airflow, you will most likely see the card run at around 1,000 RPM in this mode.
  • The board has a 10-phase VRM versus the 6+2 phase design of standard boards, meaning it is over-spec’d in order to deliver the best stability and overclocking headroom; with such a VRM it will run cooler and last longer.
  • DrMOS and high-polymer caps are used in our design – no compromises.
  • Dual fan: at this TDP there is no need for an oversized 3-fan cooler – right-sized yet efficient cooling!
  • Our cooler features 2 x 100mm double ball-bearing fans with 4 heat pipes (4X6Φ) across a high-density heatsink with a large nickel-plated base.
  • RGB is enhanced; the Red Devil now connects to the motherboard’s aRGB header (5V 3-pin connector) for RGB sync.
  • The Red Devil has Mute Fan technology – the fans stop under 60°C!
  • The ports are LED illuminated, so now you can see where to plug in your cables in the dark.
  • The card’s backplate does not have thermal pads; instead we made cuts across the backplate to let the PCB breathe, which under high-heat scenarios is more beneficial than thermal pads since the backplate can become a heat trap.
  • Red Devil buyers will be able to join exclusive giveaways as well as access the Devil Club website, a membership club for Devil users only which gives them access to news, competitions, downloads and, most importantly, instant support via live chat.

RX 6000 features

Source: AMD

AMD has their own ecosystem for gamers and many unique new features for the Radeon 6000 series. However, the above slide from AMD does not mention two features – the Infinity Cache and Smart Access Memory.

Infinity Cache & Smart Access Memory

AMD’s RDNA 2 architecture includes the Infinity Cache which alters the way data is delivered to GPUs. This global cache allows fast data access and increases bandwidth. This optimized on-die cache uses 96MB of AMD Infinity Cache delivering up to 2.5x the effective bandwidth compared to 256-bit 12Gbps GDDR6.
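A quick back-of-the-envelope reading of that claim (a sketch using only the figures quoted above; AMD’s “effective bandwidth” number also depends on cache hit rates, which we are not modeling):

```python
# Raw bandwidth of the quoted baseline: bus width (bits) x data rate (Gbps) / 8 bits per byte
baseline_gb_s = 256 * 12 / 8            # 256-bit, 12 Gbps GDDR6 -> 384 GB/s
effective_gb_s = baseline_gb_s * 2.5    # AMD's "up to 2.5x" figure -> 960 GB/s
print(baseline_gb_s, effective_gb_s)    # -> 384.0 960.0
```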

BTR uses Intel’s 10th generation flagship CPU, the i9-10900K, which does not have this cache available, so our results will probably be lower than what a gamer using a full Ryzen 5000 platform will achieve. In addition, we don’t have Smart Access Memory.

AMD’s Smart Access Memory is a new feature for the Radeon RX 6000 Series graphics cards that enables additional memory space to be mapped to the base address register resulting in performance gains for select games when paired with an AMD Ryzen 5000 Series processor or with some Ryzen 3000 series CPUs. Using PCIe, the Base Address Register (BAR) defines how much GPU memory space can be mapped. Without using Smart Access Memory, CPUs can generally access up to 256MB of GPU memory restricting performance somewhat.

NVIDIA has worked with its partners and with Intel to enable Resizable BAR, which is currently enabled on the EVGA Z490 FTW motherboard but only works for GeForce cards. When we tried to enable it for the RX 6600 XT, our PC refused to boot after following AMD’s instructions using a clean installation of Windows. So we disabled it and tested all of our video cards and games without Resizable BAR.
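For readers who want to verify whether Resizable BAR actually took effect on a GeForce card, one rough check is the BAR1 aperture size reported by the driver: around 256 MiB usually means the small BAR, while a value close to the full VRAM size indicates the large BAR is active. Here is a minimal sketch, assuming nvidia-smi is on the PATH; the exact output format can vary by driver version.

```python
import re, subprocess

# Pull the BAR1 aperture size out of the NVIDIA driver's memory report.
out = subprocess.run(["nvidia-smi", "-q", "-d", "MEMORY"],
                     capture_output=True, text=True).stdout
match = re.search(r"BAR1 Memory Usage.*?Total\s*:\s*(\d+)\s*MiB", out, re.S)
if match:
    print("BAR1 aperture:", match.group(1), "MiB")
```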

The Test Bed

BTR’s test bed consists of 32 games and 3 synthetic game benchmarks at 1920×1080 and 2560×1440, as well as SPEC, workstation, and GPGPU benchmarks. Our latest games include Chernobylite and F1 2021. The testing platform uses a recent installation of Windows 10 64-bit Pro Edition, and our CPU is an i9-10900K which turbos all 10 cores to 5.1/5.0GHz, an EVGA Z490 FTW motherboard, and 32GB of T-FORCE Dark Z DDR4 at 3600MHz. The games, settings, and hardware are identical except for the cards being compared.

First, let’s take a closer look at the new PowerColor Red Devil RX 6600 XT.

A Closer Look at the Red Devil RX 6600 XT

Although the Red Devil RX 6600 XT advertises itself as a premium 7nm 8GB vRAM-equipped card on AMD’s RDNA 2 architecture built for 1080P gaming with PCIe 4.0 support, the cover of the box favors stylized imagery over text.

The back of the box touts key features which now include HDMI 2.1 VRR, ray tracing technology, FidelityFX, and VR Ready Premium as well as states its 600W power and system requirements. AMD’s technology features are highlighted and the box features PowerColor’s custom cooling solution, Dual-BIOSes, RGB software and output LEDs, and a solid backplate with the Red Devil logo that also lights up.

Opening its very well-padded box, we see a quick installation guide, an RGB LED cable, and an invitation to join PowerColor’s Devil’s Club.

The Red Devil RX 6600 XT is a dual-fan card. Turning the Red Devil over (below) we see a solid backplate that features the devil logo that also lights up.

The Red Devil RX 6600 XT is a medium-sized dual-fan card (251mm long x 133mm tall x 54mm thick) in a 2-slot design which is quite handsome with PowerColor’s colors and even more striking with the RGB on. There is also a switch to choose between the default overclock (OC) BIOS and the Silent BIOS (above, right). Using the OC BIOS the card has a 145W power target, and using the Silent BIOS it has a 135W power target. We didn’t bother with the Silent BIOS as the card is very quiet using the OC BIOS, but it is good to have in case a flash goes bad.

The Red Devil uses one 8-pin and one 6-pin PCIe power connector while the reference version uses a single 8-pin. We would suggest that with the current voltage limitations and low power draw, the extra connector is probably not really necessary even for overclocking unless the end user circumvents the power restrictions using MPT at their own risk. Looking at the edges, we can see it is all heatsink fins for cooling as is typical of Red Devil cards, and we expect it to run cool.

Above, the PowerColor Red Devil RX 6600 XT’s other end also lights up giving the card an aggressive look.

The Red Devil RX 6600 XT’s connectors include three DisplayPorts and one HDMI port. An LED illuminates this panel to make connections easier in the dark.

The Red Devil looks great when it is running in a PC.

The specifications look good and the Red Devil itself looks great with its default bright red RGB contrasting with the black backplate, and its aggressively lit-up front end is stylistically reminiscent of an automotive grille or perhaps teeth. The end user may enhance and coordinate the RGB colors by connecting the card to the motherboard with the supplied aRGB (5V 3-pin) cable and using the DevilZone RGB software.

Let’s check out its performance after we look over our test configuration and more on the next page.

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.9 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • Red Devil RX 6600 XT 8GB, factory settings and overclocked, on loan from PowerColor
  • Red Devil RX 6700 XT 12GB, factory settings and overclocked, on loan from PowerColor
  • ASUS TUF Gaming X3 RX 5600 XT 6GB, stock settings, on loan from ASUS
  • Radeon RX 5700 XT 8GB Anniversary Edition, stock AE clocks.
  • EVGA RTX 3060 Black 12GB, stock clocks, on loan from EVGA
  • RTX 3060 Ti Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • 2 x 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive; one for AMD and one for NVIDIA
  • 1.92TB San Disk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 1TB T-FORCE Delta MAX SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan)

Test Configuration – Software

  • Adrenalin 2021 Edition 21.7.1 press drivers were used for the RX 6600 XT, 21.7.2 for the other Radeons, and 21.2.3 for the RX 5700 XT.
  • GeForce 471.41 for the RTX 3060 and the RTX 3060 Ti.
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel; Vsync off.
  • All optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings in the AMD control panel.
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold including minimum frame rates shown on the chart next to the averages in a smaller italics font where higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • F1 2021
  • Resident Evil Village
  • Hitman 3
  • Cyberpunk 2077
  • DiRT 5
  • Godfall
  • Call of Duty Black Ops: Cold War
  • Assassin’s Creed: Valhalla
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • Tom Clancy’s The Division 2
  • Borderlands 3
  • Metro Exodus & Metro Exodus Enhanced Edition
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider
  • Forza 7

DX11

  • Chernobylite
  • Days Gone
  • Crysis Remastered
  • Destiny 2 Shadowkeep
  • Total War: Three Kingdoms
  • Far Cry New Dawn
  • Assetto Corsa Competizione
  • Grand Theft Auto V

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.93.1 benchmark
  • Sandra 2021 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

Next the AMD settings.

AMD Adrenalin Control Center Settings

All AMD settings are set so that all optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings. All Navi cards are capable of high Tessellation unlike earlier generations of Radeons.

Anisotropic Filtering is disabled by default but we always use 16X for all game benchmarks.

Let’s check out overclocking, temperatures and noise next.

Overclocking, temperatures and noise

We spent a lot of time overclocking the Red Devil RX 6600 XT for this review. The Red Devil is factory clocked higher than the reference specifications using its OC BIOS. While the reference Radeon RX 6600 XT offers a Game clock up to 2359MHz and a Boost clock of 2589MHz, the PowerColor Red Devil game clocks up to 2428MHz and boosts to 2607MHz.

Above are the reference RX 6600 XT Wattman default settings, which include leaving the power limit at default. Performance did not change whether the power limit was left at default or raised, even when overclocked. Although the Red Devil boosts to 2607MHz, we typically saw clocks at around 2575MHz and the GPU stayed cool. The fan speeds are not tracked by Wattman but they remained low and we could not hear them over our other case fans.

The Wattman auto overclock feature is useless as it gave an extremely low overclock so we used trial and error to find Red Devil’s maximum performance at the edge of stability. We settled on increasing the memory to 115% (2284MHz) and increasing the core clock by 7% (2776MHz) as below.

At maximum overclock, the clocks run from 2686MHz to a peak of 2720MHz, but this time the temperatures drop below 60C as the fan speeds increase. Even while overclocked to the max, the Red Devil remains very quiet and cool with power consumption just approaching 140W.

There is a small performance increase from overclocking the RX 6600 XT core by 7% and increasing the memory by 15%. Unfortunately, AMD has again locked down overclocking on all RX 6600 XT cards in an attempt to maximize overall performance, limiting the voltage to 1150mV. We would also suggest that the RX 6600 XT is rather voltage constrained and the Red Devil could seriously benefit from more voltage. We suspect that some enthusiast gamers will use MPT (More Power Tool) and risk their warranty to gain a substantially higher Red Devil overclock although we cannot recommend it.

We believe that the Red Devil’s overclock will not degrade over time as its PCB components are built to run at the highest overclock settings all the time, perhaps unlike entry-level versions which are not engineered for ultimate maximum reliability.

Of course, many gamers will want to fine-tune their own overclock and undervolting is a possibility although at 140W the Red Devil RX 6600 XT is not a power hog. Check the overclocking chart in the next section for performance increases using ten key games.

Let’s head to the performance charts to see how the performance of the RX 6600 XT compares with five other cards.

Performance summary charts

Here are the performance results of 32 games and 3 synthetic tests comparing the factory-clocked 8GB Red Devil RX 6600 XT with the EVGA RTX 3060 Black 12GB (reference) and versus the RTX 3060 Ti FE 8GB at their factory set clocks. Three other cards are added for comparison in the Big Picture. The highest settings are used and are listed on the charts. The benches were run at 1920×1080 and at 2560×1440. Click on each chart to open in a pop-up for best viewing.

Most gaming results show average framerates in bold text, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. The games benched with OCAT show average framerates but the minimums are expressed by frametimes in ms where lower numbers are better.
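
For readers unfamiliar with these metrics, here is a minimal sketch (using made-up frametimes) of how an average framerate and a 99th-percentile frametime fall out of a capture log.

```python
# Toy example: derive an average framerate and a 99th-percentile frametime from captured frametimes.
frametimes_ms = [16.7, 16.9, 17.1, 16.5, 33.4, 16.8, 17.0, 16.6]   # made-up sample capture

avg_fps = 1000 / (sum(frametimes_ms) / len(frametimes_ms))          # average FPS from the mean frametime

ordered = sorted(frametimes_ms)
p99_index = round(0.99 * (len(ordered) - 1))                        # nearest-rank style index
p99_ms = ordered[p99_index]                                         # 99% of frames were at least this fast

print(f"average: {avg_fps:.1f} FPS, 99th-percentile frametime: {p99_ms:.1f} ms")
```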

The Red Devil RX 6600 XT vs. the RTX 3060 & RTX 3060 Ti

The first set of charts show our three main competing cards. Column one represents the RTX 3060 EVGA Black (reference speed) version ($329) performance, column two is the Red Devil RX 6600 XT (no SEP/reference $379), and column three represents the RTX 3060 Ti FE ($399) performance.

The Red Devil RX 6600 XT is faster overall than the RTX 3060 EVGA Black (reference) version but it is still in the same class, trading blows depending on the games tested. Since we do not use Resizable BAR or have Smart Access Memory, we expect that some games would shift in favor of the Radeon using a Ryzen 5000 platform. However, it is outclassed by the $20 more expensive RTX 3060 Ti, winning no games against it.

Let’s see how the reference and Red Devil RX 6600 XT fit in with our expanded main summary chart, the “Big Picture”, comparing a total of six cards.

The Big Picture

Here is how the Red Devil 6600 XT fits into a larger chart with six cards. The ASUS RX 5600 XT is in column one, the RX 5700 XT (Anniversary Edition) is in column 2, the EVGA Black RTX 3060 is in column 3, the Red Devil RX 6600 XT is in Column 4, the RTX 3060 Ti FE is in column 5, and the Red Devil RX 6700 XT is in column 6.

We see that the RX 6600 XT is a fair upgrade from the RX 5600 XT, but it is hard to believe that AMD has increased the price by $100 over the last midrange 1080P generation. The RX 6600 XT basically trades blows with the RX 5700 XT which launched at $399. We have to wonder what AMD was thinking when they set their pricing so high.

Ray Traced Benchmarks

The Red Devil RX 6600 XT is next compared with our other two main competing cards when ray tracing is enabled in ten games. No DLSS or FSR technologies are used.

The RX 6600 XT gets outperformed overall when compared with the $50 less expensive RTX 3060 after ray tracing is enabled. However, AMD has recently introduced FidelityFX Super Resolution (FSR) which is their answer to NVIDIA’s DLSS.

FidelityFX Super Resolution (FSR)

Source: AMD

FSR improves performance by first rendering frames at a lower resolution and then by using an open-source spatial upscaling algorithm with a sharpening filter in an attempt to make the game look nearly as good as at native resolution. NVIDIA’s DLSS is a more mature temporal upscaling solution that uses AI/Deep Learning. With DLSS, data is accumulated from multiple frames and combined into the final image with the AI reconstruction component running on GeForce RTX Tensor cores.

In contrast, FSR is basically a post-process shader, which also makes it easy for game developers to implement across all graphics cards and not just Radeons. So far, about a dozen games use it, and we have tested three of them. Although Ultra FSR is not the equal of DLSS – and especially not of DLSS 2.0 Quality which rivals and sometimes improves on the native image – it is still a very solid spatial (non-AI, non-temporal) upscaler that provides good performance improvements.
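
To make the distinction concrete, here is a deliberately simplified stand-in for a spatial upscale-plus-sharpen pass, written with the Pillow imaging library; it is not AMD’s FSR algorithm, which is edge-adaptive and considerably more sophisticated, and the file names are placeholders.

```python
# Toy spatial upscale + sharpen pass (NOT AMD's FSR shader, which is edge-adaptive and far
# more sophisticated). Requires the Pillow library; file names below are placeholders.
from PIL import Image, ImageFilter

def toy_spatial_upscale(path_in: str, path_out: str, scale: float = 1.3) -> None:
    img = Image.open(path_in)
    w, h = img.size
    upscaled = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)   # spatial upscale
    sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
    sharpened.save(path_out)

toy_spatial_upscale("low_res_frame.png", "upscaled_frame.png")
```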

Ultra FSR is far more than a standard Lanczos implementation plus sharpening and it brings good value to Radeons (and for all video cards!) for higher “free” performance with a minimal hit to visuals. We were especially impressed with the Ultra FSR implementation in Chernobylite. Below is a performance comparison of Quality DLSS 2.0 versus Ultra FSR.

We see the RX 6600 XT improve its framerates using Ultra FSR to match the RTX 3060 which uses Quality DLSS in Chernobylite.

Again, we see solid performance improvements with Godfall and in Resident Evil Village using Ultra FSR.

Next we look at overclocked performance.

Overclocked benchmarks

These ten benchmarks were run with the Red Devil RX 6600 XT overclocked as far as it could go while remaining stable, as described in the overclocking section. The manually overclocked results are presented in the first column and the factory-clocked results are in the second column.

There is a reasonable performance increase from manually overclocking the Red Devil RX 6600 XT beyond its factory clocks from about 2% up to around 10%.

Let’s look at non-gaming applications next to see if the RX 6600 XT is a good upgrade from the other video cards we test starting with Blender.

Blender 2.931 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We ran three Blender benchmarks which measure GPU performance by timing how long it takes to render production files. We tested our comparison cards using OpenCL for the Radeons and CUDA on GeForce – all running on the GPU instead of the CPU.
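
To give a rough idea of how such timed render tests work, here is a minimal sketch that times a single background render from the command line; the .blend file name is a placeholder, and GPU/CPU device selection (version-specific Blender options) is not shown.

```python
# Minimal sketch: time a single no-GUI Blender render of a .blend file from Python.
# "scene.blend" is a placeholder; compute-device selection is configured in the file or via
# version-specific Blender command-line options not shown here.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["blender", "-b", "scene.blend", "-f", "1"], check=True)   # render frame 1 in background mode
elapsed = time.perf_counter() - start
print(f"render time: {int(elapsed // 60)} min {elapsed % 60:.1f} s")
```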

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.

OpenCL is not as well-optimized for Radeons compared with CUDA for GeForce.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.32

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison.

Here are the Red Devil RX 6600 XT AIDA64 GPGPU results compared with an overclocked i9-10900K.

Here is the chart summary of the AIDA64 GPGPU benchmarks with five of our competing cards side-by-side.

The RX 6600 XT is a fast GPGPU card and it compares favorably with Ampere cards, being weaker in some areas and stronger in others, and it’s a solid improvement over the last generation RX 5600 XT. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2021 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra features continuous monthly incremental improvements over earlier versions. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them.

In Sandra GPGPU benchmarks, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses. However, we see solid improvements of the RX 6600 XT over the RX 5600 XT.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 summary and raw scores for the Red Devil RX 6600 XT, tested at 1900×1060.

Here are the Red Devil SPECworkstation3 results summarized and included in a chart of our five competing cards. Higher is better.

Using SPEC benchmarks, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) released the 2020 version of its SPECviewperf benchmark last year, featuring updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management. We use the 2K display resolution for midrange cards like the RX 6600 XT.

Here are the summary results for the Red Devil RX 6600 XT.

Here are SPECviewperf 2020 GPU Red Devil RX 6600 XT benchmarks summarized in a chart together with four other cards.

Again we see different architectures with different strengths and weaknesses. The Red Devil RX 6600 XT is significantly faster than the RX 5600 XT.

After seeing these benches, some creative users may upgrade their existing systems with a new card based on the performance increases and the associated increases in productivity that they require. The decision to buy a new video card should be based on the workflow and requirements of each user as well as their budget. Time is money depending on how these apps are used. However, the target demographic for the Red Devil RX 6600 XT is primarily 1080P gaming.

Let’s head to our conclusion.

Final Thoughts

The Red Devil RX 6600 XT improves significantly over the last generation RX 5600 XT, and it trades blows with and overall beats the RTX 3060 in many rasterized games, although it struggles in ray traced games compared with GeForce cards. We also somewhat handicapped the RX 6600 XT by not being able to use Smart Access Memory, and we expect that performance would be higher on a Ryzen 5000 platform.

FSR brings a great value to the RX 6600 XT as an alternative to DLSS, although it cannot quite match it in visual quality. We look forward to further improvements in FSR and hope many more games use it.

For Radeon gamers, the Red Devil RX 6600 XT is a good alternative to the RTX 3060 for the vast majority of modern PC games that use rasterization. However, the RX 6600 XT offers 8GB of GDDR6 versus the 12GB of GDDR6 that the RTX 3060 is equipped with, although the RTX 3060’s 12GB of vRAM appears to be largely wasted on a card of that class.

At its suggested price of $379, or $20 less than the RTX 3060 Ti, the reference RX 6600 XT offers much less value – if the GeForce can be found at SEP at all. PowerColor has promised that the supply of the RX 6600 XT will be plentiful, but we are skeptical. The same was promised for Ampere cards, where stock is still trickling in and being purchased the instant it becomes available from etailers that do not hesitate to mark prices up to double the SEP.

We think that AMD has set pricing too high on the RX 6600 XT by about $50. If it sold for $329 like the RTX 3060, it would be a really good value. They seem to forget that the competing GeForce is much stronger in ray traced games – with over 60 games featuring DLSS – and that FSR is brand new with only a dozen games supporting it so far. At $100 more than what the RX 5600 XT launched at, AMD has jacked up the price of 1080P gaming – pandemic shortages or no shortages – and it is not a consumer-friendly move. However, in practical terms – if the RX 6600 XT can be found at MSRP/SEP – it is a good value for now as most other competing cards are still selling for double MSRP.

PowerColor hasn’t set any pricing on the Red Devil RX 6600 XT, allowing resellers to set their own. They claim that their margins are actually below their usual historically low double digits (10-12%) for a new product. Unfortunately, it’s hard to recommend any card with no suggested price even though it is overclocked, very nicely equipped, and a well-built step up from the well-designed $379 reference version. We wish we could say that “PowerColor thinks its Red Devil is worth $30 more than the reference version” – and we would agree – but there is no pricing frame of reference.

We recommend the Red Devil RX 6600 XT as a great choice out of multiple good choices, especially if you are looking for good looks with RGB, an exceptional cooler, great performance at 1920×1080, PowerColor’s excellent support, and overall good value assuming that stock and pricing stabilize. We are convinced that PowerColor is an outstanding AMD AIB, and we never hesitate to recommend it to our friends. When we have a choice, we pick and have picked PowerColor video cards for our own purchases.

Let’s sum it up:

The Red Devil RX 6600 XT Pros

  • The PowerColor Red Devil RX 6600 XT is much faster than the last generation RX 5600 XT by virtue of new RDNA 2 architecture. It beats the RTX 3060 in many raster games and is a great ultra 1080P card that can handle 1440P with lower settings.
  • FSR is an awesome added value that can greatly improve performance without impacting visuals significantly.
  • The Red Devil RX 6600 XT has excellent cooling and it is a very quiet card even when overclocked to its maximum
  • The Red Devil has a very good power delivery system and dual-fan custom cooling design
  • Dual BIOSes give the user a choice of quiet operation with less overclocking, or a bit louder operation with a higher power limit and higher overclocks. It’s also a great safety feature if a BIOS flash goes bad
  • FreeSync2 HDR eliminates tearing and stuttering
  • Infinity Cache & Smart Access Memory give higher performance with Ryzen 5000 platform
  • Customizable RGB lighting and a neutral color allow the Red Devil to fit into any color scheme using the DevilZone software program.

Red Devil RX 6600 XT Cons

  • Pricing. $379 for a midrange 1080P card is $100 more than AMD’s RX 5600 XT. And PowerColor has set no SEP
  • Weaker ray tracing performance than the RTX 3060

If it can be found at suggested pricing, the Red Devil RX 6600 XT is a good card choice for those who game at 1920×1080, and it represents a good alternative to the RTX 3060 albeit with weaker ray tracing performance. It is offered especially for those who prefer AMD cards and FreeSync 2-enabled displays, which are generally less expensive than G-SYNC displays; and Infinity Cache & Smart Access Memory are a real plus for gamers using the Ryzen 5000 platform.

If a gamer is looking for something extra beyond the reference version, the Red Devil RX 6600 XT is a very well made and handsome RGB customizable card that will overclock decently and last a long time without performance degradation.

The Verdict:

  • PowerColor’s Red Devil RX 6600 XT is a solidly-built good-looking RGB card with higher clocks out of the box than the reference version and it overclocks decently. It trades blows with and overall beats the RTX 3060 in many rasterized games. Although we have no price, it is a kick-ass RX 6600 XT. Hopefully there will be some solid supply and the market pricing will normalize after the cryptocurrency pandemic ends (relatively soon!).

The Red Devil RX 6600 XT offers a good alternative to the RTX 3060 for solid raster performance in gaming, and it also beats the performance of AMD’s last generation by a good margin. However, everything will depend on pricing and availability.

This is what PowerColor boldly stated to us last week:

“There will be plenty of cards in the channel and we will have our base model Fighting that starts at 379$ at launch!
No Scalping prices, eTailers will have enough cards, if they raise the prices, someone else will sell for less.”

Good advice! We hope there is good availability, and if so, we can recommend the Red Devil RX 6600 XT even if it is sold at AMD’s inflated SEP pricing, because the competing cards are mostly unavailable even at double their MSRP. Do not reward scalpers or etailers who sell at inflated prices and who do not deserve our business. We can outwait them.

Stay tuned, there is much more coming from BTR. This weekend we will return to VR with a performance evaluation comparing the Red Devil RX 6600 XT with the RTX 3060. And stay tuned for Rodrigo’s upcoming 471.68 driver performance analysis!

Happy Gaming!

64GB T-FORCE ZEUS 3200MHz SO-DIMM DDR4 – Turning a basic notebook into a workstation? https://babeltechreviews.com/64gb-t-force-zeus-3200mhz-so-dimm-ddr4-turning-a-basic-notebook-into-a-workstation/ Mon, 14 Jun 2021 20:06:03 +0000

T-FORCE ZEUS Notebook SO-DIMM 3200MHz DDR4 2x32GB Kit Memory Review – Turning a basic 12GB Notebook into a Workstation?

T-FORCE ZEUS memory is a fast high-capacity 32GBx2 gaming SO-DIMM 3200MHz DDR4 notebook memory kit for $289.99 that we received from TeamGroup to see if 64GB brings anything extra for notebook users over 12GB of system memory. We want to see if a storage and memory kit upgrade can bring extra performance to our basic $599 budget 1080P 17″ HP by2053cl notebook. Is it possible to turn a budget notebook into a workstation?

BTR currently uses a very basic 1080P HP 17.3″ notebook with an Intel i5-10210U, a 1TB hard drive, and 12GB of 2666MHz DDR4 that sells on Amazon for $699, $100 more than we paid for it at Costco last year. The i5-10210U is a quad-core 10th generation Intel CPU with hyperthreading; it is a capable mobile CPU with a 1.6GHz base frequency that turbos up to 4.2GHz and has 6MB of L3 cache. The HP notebook is sold inexpensively since it comes standard with a painfully slow 1TB 5400 rpm HDD and a barely acceptable 12GB (8GB+4GB) of system RAM.

For years, 16GB of RAM has been considered the optimum capacity for high-end desktop PC gaming, and 8GB was considered sufficient for notebook PCs. We have found that 8GB is no longer ideal, as we experienced slowdowns with our old notebook, which was used for all of our writing, office, Photoshop, Excel, Word, presentation, and Internet needs, including using WordPress to write BTR reviews. Our 10-year-old Dell workstation notebook needed replacement as it was literally falling apart, so we purchased a budget $599 17″ HP by2053cl notebook after doing our research.

Since our HDD-based HP notebook took about 2 minutes to start up Windows, the first thing we did was add a 480GB Kingston A-1000 NVMe SSD which became our main boot drive, and the 1TB 5400 rpm HDD was relegated to storage. Now our notebook starts Windows 10 in just a few seconds.

The standard 12GB memory configuration for our HP notebook consists of 8GB Kingston DDR4-2666MHz and 4GB Micron DDR4-3200MHz both running at 2666MHz. 12GB of system RAM is a solid upgrade over the 8GB of our old Dell PC and we rarely encounter slowdowns except when editing very large Photoshop images.

Recently some gamers appear to promote 32GB as the new 16GB as necessary for “future proofing”. It is true that if a gamer multitasks while gaming – perhaps content streaming or creation, downloading/uploading, or with memory intensive programs working in the background – then perhaps 16GB may not be enough. And extreme gamers who are aiming for photorealism by modding games using ultra high textures may need more than 32GB of RAM. However, our notebook uses Intel CPU integrated graphics and is unsuitable for playing modern games at 1080P, so we will focus on office tasks including using Photoshop, Word, EXCEL, and Internet browsing, as well as on light workstation tasks.

From our testing with Ivy Bridge, Haswell, Skylake, Coffee and Comet Lake platforms, using fast DDR over slower DDR brings only limited performance improvements for a few CPU-dependent games. However, we found that using faster memory results in extra overall performance gains for many other tasks and applications. Unfortunately, HP does not allow faster memory in our notebook’s BIOS, so we had to run our ZEUS SO-DIMM 3200MHz at 2666MHz. If we had picked the memory, we would have chosen a ZEUS 2666MHz kit which is a little less expensive than the 3200MHz kit.

Testing Platform and ZEUS SO-DIMM DDR4 Notebook Memory Specifications

Our testing platform is a very recent clean installation of Windows 10 Pro 64-bit using our 1080P 17″ HP by2053cl notebook with the Kingston A-1000 NVMe2 SSD as primary C: drive. The settings, benchmarks, testing conditions, and hardware are identical except for the two DDR4 kits being compared – the 64GB ZEUS DDR4 3200MHz and the 12GB (8GB Kingston + 4GB Micron) mixed memory kit. All DDR speeds are locked by the HP’s BIOS to 2666MHz.

Here are the ZEUS DDR4 SO-DIMM specifications from TeamGroup’s website.

Source: TeamGroup

The ZEUS DDR4 defaults to 2666MHz in the HP notebook’s BIOS and the timings are set almost identically to the mixed memory at 19-19-19-44.

We will compare the performance of both DDR4 kits – the 12GB mixed memory at 2666MHz and the ZEUS 64GB at 2666MHz – to chart the effects of high-capacity memory on the performance of one modern game benchmark’s five loading levels, Final Fantasy XIV: Shadowbringers, at 1920×1080 resolution. We also benchmark using many of the recognized memory and CPU related benchmarking tools including AIDA64, SANDRA, RealBench, PCMark 8 and 10, Cinebench, Novabench, and SPECworkstation3.

Team Group offers a lifetime warranty for their T-FORCE desktop and notebook memory.

Let’s unbox the memory kit on the next page and take a closer look.

Unboxing

The T-FORCE ZEUS SO-DIMM DDR4 3200MHz 2x32GB memory kit comes in an anti-static blister pack with a card that advertises its features.

The T-FORCE logo uses a stylized hawk symbolizing a gamer’s independent spirit of flying free. The product card explains the lightning bolt design as being something that Zeus would choose, and that it is “Born for Gaming”.

After removing the memory from the anti-static blister pack, we placed it next to the rest of the contents.

The installation guide is illustrated and it is easy to install notebook memory once the notebook is opened up.

The ZEUS SO-DIMM DDR4 is good looking and it is rather unfortunate that it is hidden inside most notebooks. It uses Hynix RAM modules.

Most notebooks that can be upgraded have plenty of room, as SO-DIMMs are standardized. We opened up the back of our HP notebook PC by removing all the screws and then prying the innards out of the bottom shell.

Part of the reason that this notebook is cheap and unpopular is its slow HDD, even though it has a capable CPU and 12GB of installed memory. Originally, Windows 10 took over two minutes to fully start up; just installing a 480GB Kingston A-1000 SSD reduced that to a few seconds.

This HP notebook comes with 12GB of DDR4 – 8GB Kingston and 4GB Micron, both running at 2666MHz. The HP BIOS is very restrictive and caps all faster DDR4 at 2666MHz, so we cannot take full advantage of the ZEUS running at its rated 3200MHz, although its timings are set faster at 2666MHz.

We installed the ZEUS SO-DIMMs, closed our HP’s case, tightened up the screws, and started it up with 64GB of system memory and an SSD that we had cloned from the 1TB HDD.

Before we check to see if there are performance increases from using higher capacity system RAM, let’s look at our test configuration.

Test Configuration – Hardware

  • 17″ HP by2053cl notebook F.59 BIOS/Latest Drivers
  • Intel i5-10210U (HyperThreading and Turbo boost are on for 1.6 GHz base frequency, up to 4.2 GHz with Intel Turbo Boost Technology, 6 MB L3 cache, 4 cores).
  • 480GB Kingston A-1000 NVMe PCIe SSD
  • T-FORCE ZEUS PC4 25600 DDR4 3200MHz CL22 2x32GB kit underclocked to PC 21300 DDR4 2666MHz CL19
  • 12GB Mixed Memory – 8GB Kingston DDR4-2666MHz and 4GB Micron DDR4-3200MHz both running at 2666MHz CL19

Test Configuration – Software

  • Windows 10 64-bit Pro edition fully updated – 21H1 (Build 19043.1023)
  • Latest DirectX
  • CPU-Z
  • MemTest64
  • Windows Memory Diagnostics
  • SiSoft Sandra 2021
  • AIDA64
  • PCMark 8 (Creativity Suite)
  • PCMark 10 (Extended)
  • RealBench
  • Cinebench R23
  • Novabench
  • SPECWorkstation3

PC Game

  • Final Fantasy XIV: Shadowbringers – 5 levels

Let’s head to our benching results.

Benchmarking

Individual chart results are always listed in order: 1) mixed memory 12GB and 2) ZEUS SO-DIMM 64GB. Note that all of the charts mistakenly label 2666MHz as 2660MHz.

Synthetic Benches

SiSoft Sandra 2020

To see where memory performance results differ, there is no better tool than SiSoft’s SANDRA 2020. SiSoftware Sandra (the System ANalyser, Diagnostic and Reporting Assistant) is a complete information & diagnostic utility in a single package. It is able to provide all the information about your hardware, software and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. It is highly recommended! SiSoft’s Sandra 20/20 R8t (v30.61) is the very latest version, and we are using the full engineer suite courtesy of SiSoft. The latest version features multiple improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC and give recommendations for improvement.

We run the SANDRA memory intensive benchmark tests. Here is the chart summarizing the results of our memory speed testing.

Memory bandwidth is significantly higher using the ZEUS 64GB DDR4 over the 12GB mixed memory.

We next feature AIDA64.

AIDA64 v6.00

AIDA64, the successor to Everest, is an important industry tool for benchmarkers. Its memory bandwidth benchmarks (Memory Read, Memory Write, and Memory Copy) measure the maximum available memory data transfer bandwidth. AIDA64’s benchmark code methods are written in Assembly language, and they are extremely optimized for every popular AMD, Intel and VIA processor core variant by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days.

The AIDA64 Memory Latency benchmark measures the typical delay when the CPU reads data from system memory. Latency is measured from the issuing of the read command until the data arrives in the CPU’s integer registers. AIDA64 also tests Memory Read, Write, and Copy speeds as well as cache performance.
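
Conceptually, a latency test like this boils down to dependent (“pointer-chasing”) loads through a buffer far larger than the CPU caches. The Python sketch below only illustrates the idea; interpreter overhead makes its absolute numbers far higher than AIDA64’s hand-optimized assembly results.

```python
# Conceptual pointer-chasing latency test: every load depends on the previous one, so the CPU
# cannot prefetch or overlap the accesses. Python interpreter overhead inflates the absolute
# numbers; this only illustrates the technique AIDA64 implements in optimized assembly.
import random
import time

N = 1 << 22                      # ~4 million entries, far larger than the CPU caches
chain = list(range(N))
random.shuffle(chain)            # randomized order defeats hardware prefetching

index, hops = 0, 1_000_000
start = time.perf_counter()
for _ in range(hops):
    index = chain[index]         # dependent load: the next index comes from the previous load
elapsed = time.perf_counter() - start
print(f"~{elapsed / hops * 1e9:.0f} ns per dependent access (includes interpreter overhead)")
```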

The mixed 12GB DDR4 is first.

Next we bench the ZEUS 64GB DDR4 memory and note a large overall increase in memory bandwidth.

Here is the summary chart of the main four AIDA64 memory benchmarks.

Just like with Sandra 20/20, the ZEUS 64GB kit’s memory bandwidth is significantly higher than the 12GB of mixed memory.

Let’s look at PCMark 8 next to see if its benchmarks can reflect memory capacity increases.

PCMark 8

PCMark 8 has a Creative test which uses real world timed benchmarks including web browsing, video group chat, photo, batch, and video editing, music and video tests, and even mainstream gaming. Since the PCMark 8 Storage Test does not test the CPU, there is no performance difference from increasing memory clock speeds so we used the Creative benchmark suite.

The mixed 12GB DDR4 is benchmarked first.

Next we bench the ZEUS 64GB DDR4 memory kit.

There isn’t much difference in the overall scores or individual scores. 64GB DDR4 doesn’t perform faster than 12GB DDR4 at the same memory clocks for the common tasks that PCMark 8 benchmarks.

PCMark 10 is next.

PCMark 10

The PCMark 10 benching suite is the follow-up to PCMark 8 and it also uses real-world timed benchmarks which include web browsing, video group chat, photo, batch, and video editing, music and video tests, and even mainstream gaming. PCMark 10 offers two primary tests and we chose the extended version.

The mixed 12GB DDR4 is benchmarked first with a score of 2572.

Next we bench the ZEUS 64GB DDR4 memory with a score of 2593.

Here is the PCMark 10 summary chart:

The PCMark 10 overall results show a tiny increase using the ZEUS 64GB kit over the 12GB of mixed memory. System memory capacity has no real effect on the scores of these common tasks benchmarked.

Let’s look at our next synthetic test, RealBench.

RealBench v2.56

RealBench is a benchmarking utility by ASUS Republic of Gamers which benchmarks image editing, encoding, OpenCL, and heavy multitasking, and gives an overall score for easy comparison offline or online. Some of these tests are affected by CPU and memory speeds.

The mixed 12GB DDR4 is up first and scores 33,133.

Next we upgrade from 12GB to 64GB of ZEUS DDR4 and score 36,582.

Here are the individual tests summarized.

Just like with PCMark, the individual results are inconclusive but the scores generally increased with the higher memory capacity and increased bandwidth.

Next we benchmark using Cinebench.

Cinebench

CINEBENCH is based on MAXON’s professional 3D content creation suite, CINEMA 4D. The latest R23 version of CINEBENCH can test up to 64 processor threads accurately and automatically. It is an excellent tool to compare CPU and memory performance. We are going to focus only on the CPU, and higher is always better.

The 12GB mixed DDR4 is first with 2883 Multi-core and 1080 Single core points.

Next we benchmark Cinebench with 64GB ZEUS DDR4 with 2923 Multi-core and 1073 Single core points.

There is very little difference between the scores as shown by the chart summarizing the Cinebench runs.

Next up, Novabench.

Novabench

Novabench is a very fast benching utility that gives out a memory score showing the overall bandwidth.

The mixed 12GB DDR4 is first with 21674MB/s. Next we install the 64GB ZEUS 3200MHz memory and note an overall bandwidth increase to 25847MB/s.

Here are the Novabench memory scores summarized in a chart.

The Novabench results seem to fall in-line with the other synthetic benchmarking suites by showing the bandwidth increasing by using the ZEUS higher capacity RAM.

We used to benchmark 7zip separately as a memory test, but now we use SPECWorkstation3 CPU benchmarks which includes it and many others.

SPECworkstation3 (3.0.4) CPU Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services. The most comprehensive workstation benchmark is SPECworkstation3. Specworkstation3 is a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. SPECworkstation CPU benchmarks are perhaps more demanding than 3DMark tests.

We only tested CPU-related SPECworkstation performance, which includes multiple tests like 7-Zip, Python36, Handbrake, and LuxRender.

Here are our 12GB mixed memory SPECworkstation 3.0.4 CPU summary and raw scores.

Next up are the ZEUS 64GB DDR4 summary and raw scores.

Here are the SPECworkstation3 CPU results summarized in a chart. Higher is better since the results are expressed as scores.

Using the SPECworkstation3 benchmarks, we see the ZEUS 64GB memory kit generally score higher than the 12GB of mixed memory by virtue of its higher bandwidth.

Next we look at game/level loading speeds.

The Game/Level Loading Timed Results

Game and game level loading time results are difficult to measure precisely so we used the Final Fantasy XIV: Shadowbringers benchmark that measures the loading time of five scenes and which also gives an average framerate.

Next we test using the 64GB ZEUS kit.

For once, we see a real benefit in having higher capacity system RAM in this benchmark. Shadowbringers’ five benchmarked levels not only load faster, its average framerate is higher using the 64GB ZEUS SO-DIMM kit over 12GB of mixed memory at the same speeds.

The purpose of higher capacity RAM is not gaming but workstation and professional applications. Content creators, professional video and image editors, programmers, CAD, and other design software power users will benefit from having more RAM, especially in a workstation situation. Running out of system RAM drastically slows down projects as the PC then has to swap memory to disk, and RAM is always faster than the fastest SSD. Let’s take image processing as one example.

Photoshop image processing uses a lot of RAM since working with just one image may take hundreds of MB. And there may be dozens of versions of the same image in different stages that all need to be processed in parallel. Working with multiple images – each with multiple versions in stages requiring parallel processing – takes many gigabytes of RAM to keep all of the image processing in the RAM memory, and using 32GB RAM (or more) may be considered useful and usual.
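
A quick back-of-the-envelope calculation shows why: an uncompressed layer needs width x height x channels x bytes per channel, and an editing session keeps many layers and history states resident at once. The numbers below are purely illustrative.

```python
# Back-of-the-envelope memory footprint of uncompressed image layers (illustrative figures only).
def layer_size_mb(width: int, height: int, channels: int = 4, bytes_per_channel: int = 2) -> float:
    return width * height * channels * bytes_per_channel / 2**20

per_layer = layer_size_mb(6000, 4000)    # a 24-megapixel image at 16 bits per RGBA channel
print(f"{per_layer:.0f} MB per layer")                                   # ~183 MB
print(f"{per_layer * 20 / 1024:.1f} GB for 20 layers/history states")    # ~3.6 GB for one document
```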

Since RAM is significantly faster than disk – perhaps 10X faster than an SSD – active projects should always be able to fit entirely into system memory. Multiple images that may take minutes to process with 32GB or 64GB of RAM may take hours to write to disk on an 8GB, 12GB, or 16GB memory system because of the high volume of data. But for gamers, there isn’t a lot of need for more than 16GB of system memory. This may well change with the next generation of consoles, but “future proofing” now doesn’t make a lot of sense as DDR5 will likely be popular by the time memory-hogging new console ports to PC are released.

Few situations find 12GB of system RAM insufficient for our daily tasks, although our old Dell notebook struggled with 8GB at times. Using 12GB as above, we have multiple tabs open in Chrome and in Edge, as well as photos open, Photoshop Elements, Excel, Word, and multiple other programs running simultaneously, as is normal for us. 37% of our CPU is being used as well as 71% of the system memory. Just leaving multiple Chrome tabs open will result in memory leakage, and we have encountered slowdowns with multiple tabs open.

Below we see a similar situation but with ZEUS 64GB system RAM. Now we see only 16% of our CPU and only 14% of our system memory are being used. We now have a lot more headroom to do multiple tasks simultaneously.
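
Converting those Task Manager percentages into absolute numbers (a simple calculation, shown below) makes the point clearer: the workload occupies roughly the same amount of RAM in both cases; the 64GB kit simply leaves far more headroom.

```python
# Convert Task Manager memory-usage percentages into absolute gigabytes.
def used_gb(total_gb: float, percent_used: float) -> float:
    return total_gb * percent_used / 100

print(f"{used_gb(12, 71):.1f} GB in use of 12 GB")   # ~8.5 GB -- very little headroom left
print(f"{used_gb(64, 14):.1f} GB in use of 64 GB")   # ~9.0 GB -- plenty of headroom remaining
```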

There are also instances where a pure gaming PC may want to have more than 16GB of system RAM. Modders who aim for photo realistic games by using super high resolution textures may need more than 16GB. Streamers will generally want more than 16GB, and VR gamers may find that by using a Wireless adapter, 5GB-6GB of memory may be used while just idling on the desktop! But for our notebook – which will never be a real workstation – 16GB would be plenty, 32GB overkill, and 64GB totally unnecessary.

The only advantage we can see is that if you need a super-fast virtual drive, then a RAM disk or drive is the way to go using ImDisk Toolkit. It is freeware that is very easy to use and it offers many custom options. Although it is fast, we already tested a 48GB RAM drive and found it useless for gaming.

RAM is volatile memory. Although you can work with and save files to your RAM drive, if your computer crashes or suffers a power loss, those saved files are forever lost beyond any chance of recovery. You must leave your PC on until you copy the data you want from your RAM disk to your SSD or HDD to save it.

Just like before, we give an overall thumbs down to RAM drive creation as an advantage for gamers with 64GB RAM, except perhaps for certain applications or games.

Next, the gaming benchmarks and the summary charts followed by our conclusion.

Game Performance Results, Summary Charts, & Conclusion

Below are the Summary charts in one location.

Let’s head for the conclusion.

Conclusion and Verdict

We note the importance of more than 16GB of system RAM for workstation and professional applications. However, there is no way that we can recommend 32GB or 64GB of system RAM to either a desktop or notebook gamer who only uses their PC for gaming and typical office/light Photoshop/Internet browsing tasks. We simply have not been able to stress our 64GB of system memory enough with typical benchmarking suites to show any major performance differences. If a gamer wants more than 16GB of system memory as a multitasker or a modder, then 32GB is a good option. 64GB is overkill for a desktop and doubly so for most notebook users.

Faster RAM benefits are mostly shown by memory intensive benchmarks and they will mostly translate to better productivity outside of gaming. If you are a gamer, buy faster memory and pick ZEUS SO-DIMM 2x8GB ($89.99) for gaming or 2x16GB ($159.99) if you multitask while gaming. So far, 32GB is not the ‘new 16GB’ for gaming. It doesn’t make a lot of sense for gamers to buy extra DDR4 now for ‘future proofing’ when DDR5 is not that far off. And there is no advantage for us to use 64GB of system memory.

We cannot turn our basic notebook into a true workstation by simply upgrading the storage and the memory capacity – at least, not without running an eGPU – but that is for another review. However, we did take a slow budget $599 notebook and turn it into a very capable and fast portable work PC with no fear that we will run out of memory.

T-FORCE ZEUS SO-DIMM 2x32GB Notebook DDR4 3200MHz Kit

Pros

  • The T-FORCE ZEUS 2x32GB SO-DIMM 3200MHz DDR4 kit is suited to professional, workstation, and creative applications as well as heavy multitasking, and it can be used in a gaming PC without any penalty compared with a lower-capacity kit
  • 3200MHz is fast memory for notebook gaming
  • Our ZEUS SO-DIMM review sample is fast, stable, and it is equipped to handle any workstation (or gaming) notebook uses
  • The ZEUS SO-DIMM DDR4 is competitively priced and it comes with a lifetime Team Group warranty

Cons

  • Price. 64GB is expensive except for true workstation uses. Buy the 16GB or 32GB kit instead for most notebook gamers/multi-taskers/power users.

The Verdict

If you are a notebook gamer who wants high-quality components and great performance, and you also need to work with RAM-intensive applications or multitask heavily, then the T-FORCE ZEUS SO-DIMM DDR4 3200MHz 2x32GB kit is an excellent memory-intensive workstation choice.

We feel that the T-FORCE ZEUS SO-DIMM DDR4 3200MHz 2x32GB kit is ideal for notebook workstation gamers who run memory-intensive applications and want 3200MHz. ZEUS is BTR’s choice for our own work notebook, although we would have saved a lot of money by instead picking the ZEUS SO-DIMM 2x16GB DDR4 2666MHz kit with no performance disadvantage.

Gamers who play modded games using ultra high textures may need 32GB of RAM or more. Gamers who stream while gaming or otherwise multitask will benefit from 32GB. VR gamers may also benefit. But for a pure gaming desktop or notebook PC, 16GB is plenty for now and TeamGroup has many choices at good prices. 32GB is not the new 16GB and 64GB is overkill for gamers and most PC users.

Memory prices change daily so we suggest checking for sales to get the best bang for buck.

Next up, we compare the performance of the Vive Pro 2 with the Reverb G2 and with the Valve Index!

Happy Gaming!

The RTX 3070 Ti Launch Review Featuring the Vive Pro 2 https://babeltechreviews.com/the-rtx-3070-ti-launch-review-featuring-the-vive-pro-2/ Wed, 09 Jun 2021 12:56:19 +0000

The RTX 3070 Ti Arrives at $599 – 25 Pancake Games, Vive Pro 2 VR Performance, and GPGPU Benchmarks

BTR received the RTX 3070 Ti 8GB Founders Edition (FE) from NVIDIA, and we have been testing its performance by benchmarking 25 games and five VR games using the new Vive Pro 2, and also by overclocking it, with an emphasis on ray tracing and DLSS. Although the RTX 3070 Ti is a gaming card, we have added workstation, SPEC, and GPGPU benches. We feature the Vive Pro 2 to see if an RTX 2080 Ti / RTX 3070/Ti class of card can power its extreme resolution, but this is not a review of the new headset yet.

We are going to compare performance using eight top cards to see where the RTX 3070 Ti FE fits in – the RTX 3070 Ti FE, the RTX 3080 Ti FE, the RTX 3090 FE, the RTX 3080 FE, and the RTX 3070 FE, as well as the reference RX 6800 and RX 6800 XT and the Red Devil RX 6900 XT. However, because of supply/demand issues, all suggested pricing is meaningless as only a very lucky few gamers will get these cards at or close to MSRP/SEP.

NVIDIA indicates that the RTX 3070 Ti has been in full production and stockpiled for over a month, so cards are already in the hands of retailers and have been for weeks to build supply. Even so, it will probably still sell out within minutes because demand is incredibly high. Fortunately, the end of the COVID-19 and crypto pandemics is in view, and a new ‘Roaring 20s’ for gamers may soon appear on the horizon with lower prices and better availability by the autumn.

Specifications

We have already covered Ampere’s features in depth and we have reviewed the RTX 3070, the 3070 Ti’s $499 slower brother that comes equipped with 8GB of GDDR6 vRAM. The RTX 3070 Ti is a GDDR6X upgrade over the RTX 3070. Besides its faster memory, the 3070 Ti also has more CUDA cores and slightly higher clock speeds, as well as a flow-through cooler design similar to the RTX 3080/3080 Ti/3090.

This review will consider whether the new RTX 3070 Ti FE at $599 – $100 more than the RTX 3070 – delivers a good value. Below are the specifications comparing the RTX 3070 Ti with the RTX 2070 as well as with the RTX 3070.

Source: NVIDIA

Since the RTX 2080 Ti launched in 2018, there are now more than 130 games and applications supporting NVIDIA’s RTX tech including ray tracing and Deep Learning Super Sampling (DLSS). Since all of the vendors and console platforms now support ray tracing technology, we will focus on these newer games. NVIDIA’s Reflex latency-reducing technology is also now supported in 12 of the top 15 competitive shooters and we will follow up this review with an upcoming latency review.

We benchmark using Windows 10 64-bit Pro Edition at 1920×1080, 2560×1440, and at 3840×2160 using Intel’s Core i9-10900K at 5.1/5.0 GHz and 32GB of T-FORCE DARK Z 3600MHz DDR4 on a EVGA Z490 FTW motherboard. All games and benchmarks use the latest versions, and we use the most recent drivers.

Let’s first unbox the RTX 3070 Ti Founders Edition before we look at our test configuration.

The RTX 3070 Ti Founders Edition Unboxing

The Ampere generation RTX 3070 Ti Founders Edition is also a completely redesigned Founders Edition and here is the card, unboxed.

Inside the box and beneath the card are warnings, a quick start guide and warranty information, plus the 12-pin to PCIe dual 8-pin dongle that will be required to connect the RTX 3070 Ti to most PSUs.

Just like the other Ampere Founders Editions, the RTX 3070 Ti comes in a “shoebox” style box where the card inside lays flat at a slight incline for display.

The system requirements, contents, and warranty information are printed on the bottom of each box. The RTX 3070 Ti requires a minimum 750W power supply unit, and the case must have space for a 267mm (L) x 112mm (W) two-slot card.

It easily fits in our Phanteks Eclipse P400 ATX mid-tower as it is much smaller than the RTX 3090 and slightly smaller than the RTX 3080 Ti.

The RTX 3070 Ti Founders Edition is a moderately heavy 2-slot card with dual fans. As a GDDR6X upgrade over the RTX 3070, the 3070 Ti also has more CUDA Cores and slightly higher clock speeds, as well as the flow-through cooler design similar to the RTX 3080/3080 Ti/3090.

Turning the card over, we see the similar unique design of the top Ampere FEs with the flow-through cooler. This card is designed to keep the GPU cool partly by using a short PCB, and inside the card it is mostly all heatsink fins.

There is a very large surface area for cooling, so the heat is readily transferred to the fin stack, and the dual fans exhaust the heat out of the back of the case and also from the top of the card into the case’s airflow.

The IO panel has a very large air vent and four connectors. The connectors are similar to the Founders Edition of the RTX 2080 Ti and the RTX 3080, but the VirtualLink connector for VR is no longer used. Three DisplayPort 1.4 connectors are included, and the HDMI port has been upgraded from 2.0 to 2.1 allowing for 4K/120Hz over a single HDMI cable.

Before we look at overclocking, power and noise, let’s check out our test configuration.

Test Configuration

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0Ghz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.3 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • RTX 3070 Ti Founders Edition 8GB, stock and overclocked, on loan from NVIDIA
  • RTX 3080 Ti Founders Edition 12GB, stock and overclocked, on loan from NVIDIA
  • RTX 3090 Founders Edition 24GB, stock clocks, on loan from NVIDIA
  • RTX 3070 Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • RTX 3080 Founders Edition 10GB, stock clocks, on loan from NVIDIA
  • Radeon RX 6800 16GB reference version, stock clocks on loan from AMD
  • Radeon RX 6800 XT 16GB reference version, stock clocks on loan from AMD
  • Red Devil RX 6900 XT 16GB, at Red Devil clocks, loaned by PowerColor and returned in April.
  • VIVE PRO 2, on a short-term loan from HTC/VIVE
  • 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive
  • 1.92TB San Disk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • BenQ EW3270U 32″ 4K HDR 60Hz FreeSync monitor
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking and overclocking performed with the case closed

Test Configuration – Software

  • GeForce 466.47 drivers (the RTX 3080 Ti press launch drivers) are used for all GeForce cards except the RTX 3070 Ti and RTX 3070, which use the new card’s press launch drivers, 466.61.
  • Adrenalin 21.5.2 drivers are used for the RX 6800 and the RX 6800 XT, and 21.3.2 is used for the RX 6900 XT.
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel.
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold, with minimum frame rates shown on the chart next to the averages in a smaller italics font where higher is better. Games benched with OCAT show average framerates, but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates 21H1 (Build 19043.1023). DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX
  • MSI’s Afterburner, 4.6.4 beta to overclock the RTX 3070 Ti
  • FCAT VR
  • fpsVR
  • OpenVR Benchmark

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • World War Z
  • Rainbow 6 Siege

DX12

  • Resident Evil Village
  • Metro Exodus – Enhanced Edition & regular edition
  • Hitman 3
  • Cyberpunk 2077
  • DiRT 5
  • Godfall
  • Call of Duty Black Ops Cold War
  • Assassin's Creed Valhalla
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • F1 2020
  • Borderlands 3
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider

DX11

  • Days Gone
  • Crysis Remastered
  • Destiny 2 Shadowkeep
  • Total War: Three Kingdoms

VR Games

  • Assetto Corsa Competizione
  • Elite Dangerous
  • No Man’s Sky
  • Project CARS 2
  • Skyrim

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.92 benchmark
  • Sandra 2020/21 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020
  • Octane benchmark

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

AMD Adrenalin Control Center Settings

All AMD settings are set so that all optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings. Navi cards are quite capable of high Tessellation unlike earlier generations of Radeons.

Anisotropic Filtering is disabled by default but we always use 16X for all game benchmarks.

Let’s check out overclocking, temperatures and noise next.

Overclocking, Temperatures & Noise

All of our performance and overclocking testing is performed in a closed Phanteks Eclipse P400 ATX mid-tower case. Inside, the RTX 3070 Ti is a quiet card even when overclocked, and we never needed to increase its fan speeds manually or change the stock fan profile. We overclocked using Afterburner without adding any extra voltage.

We used Heaven 4.0 running in a window at completely maxed-out settings at a windowed 2560×1440 to load the GPU to 98% so we could observe the running characteristics of the RTX 3070 Ti and also to be able to instantly compare our changed clock settings with their results. At completely stock settings with the GPU under full load, the card ran cool and stayed below 85C with clocks that averaged around 1850MHz.

Simply raising the Power and Temperature limits to their maximums resulted in the clocks running above 1875MHz with a small rise in temperatures using the stock fan profile.

After testing multiple combinations, our RTX 3070 Ti’s final stable overclock for the highest overall performance added a +150MHz offset to the core and +800MHz to the memory to achieve a core clock above 2000MHz with a memory clock of 10300MHz. The RTX 3070 Ti FE is power-limited, and achieving a higher overclock would require more voltage.
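For readers who want the offset arithmetic spelled out, here is a minimal sketch of how those Afterburner offsets map onto the clocks we observed; the ~1850MHz and ~9500MHz starting points are our observed/implied stock values, not official NVIDIA specifications.

```python
# Minimal sketch of Afterburner-style offset arithmetic for our RTX 3070 Ti overclock.
# Starting clocks are our observed/implied stock values, not official specifications.
observed_stock_core_mhz = 1850   # average core clock we saw under Heaven load
implied_stock_mem_mhz = 9500     # implied by the +800MHz offset yielding 10300MHz

core_offset_mhz = 150            # final stable core offset
mem_offset_mhz = 800             # final stable memory offset

print(f"Core: ~{observed_stock_core_mhz + core_offset_mhz}MHz and above, boost permitting")
print(f"Memory: {implied_stock_mem_mhz + mem_offset_mhz}MHz")
```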

Although we were unable to spend a lot of time overclocking it, our review sample appears to be only a fair overclocker. If you want a higher overclock, pick a partner overclocked AIB RTX 3070 Ti. To see the performance increase from overclocking, we tested 5 games. The results are given after the main performance charts in the next section.

First, let’s check out performance on the next page.

Performance Summary Charts & Graphs

Gaming Performance Summary Charts

Here are the summary charts of 25 games and 3 synthetic tests. The highest settings were always chosen and the settings are listed on the chart. The benches were run at 1920×1080, 2560×1440 and at 3840×2160. Eight cards were compared, listed in order from left to right: the RTX 3070 FE, the reference RX 6800, the RTX 3070 Ti, the RX 6800 XT, the RTX 3080 FE, the RTX 3080 Ti FE, the RTX 3090 FE, and the Red Devil RX 6900 XT (which was benchmarked in April).

Most results, except for synthetic scores, show average framerates, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. Games benched with OCAT show average framerates, but the minimums are expressed by frametimes (99th-percentile) in ms where lower is better. Performance wins between the RTX 3070 Ti and the RX 6800 are given in yellow text.
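As a minimal illustration of how those two numbers are derived, the sketch below computes an average framerate and a 99th-percentile frametime from a list of per-frame times; the sample values are hypothetical, not taken from our charts.

```python
# Minimal sketch: average FPS and 99th-percentile frametime from frametimes (ms).
# The sample frametimes are hypothetical, not taken from our benchmark data.
import statistics

frametimes_ms = [8.3, 9.1, 8.7, 12.4, 8.9, 9.5, 8.6, 15.2, 9.0, 8.8]

avg_fps = 1000 / statistics.mean(frametimes_ms)

# 99th-percentile frametime: the value 99% of frames complete within (lower is better).
sorted_ft = sorted(frametimes_ms)
idx = min(len(sorted_ft) - 1, int(round(0.99 * (len(sorted_ft) - 1))))
p99_ms = sorted_ft[idx]

print(f"Average: {avg_fps:.1f} FPS, 99th-percentile frametime: {p99_ms:.1f} ms")
```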

Please click on each chart to open a pop-up window for its best viewing experience.

Although there is some game-dependent variability, the RTX 3070 Ti FE is only around 3-10% faster than the RTX 3070 FE but it is enough to now easily trade blows with the reference RX 6800 in rasterized games, winning more than it loses, and is much faster in most ray traced games and a lot faster when DLSS is used.

Next we look at overclocked performance.

Overclocked benchmarks

These benchmarks are run with the RTX 3070 Ti overclocked +150MHz on the core and +800MHz on the memory versus at stock clocks. The RTX 3070 Ti overclocked results are presented first and the stock results are shown in the second column.

There is a small performance increase from overclocking the RTX 3070 Ti Founders Edition. Although we did not have enough time to fully optimize our overclock, it’s clear that NVIDIA has locked down overclocking on Ampere cards in an attempt to maximize out-of-the-box performance for all Founders Edition gamers. We would also suggest that the RTX 3070 Ti FE is rather voltage constrained, and if you want a higher overclock, pick a factory-overclocked partner version instead of a Founders Edition.

Let’s next look at VR gaming with the Vive Pro 2. The following is not our review of the Vive Pro 2 – the full review will follow next week. Instead we are going to focus on performance.

VR Gaming with the Vive Pro 2

The Vive Pro 2 is a much more demanding headset than the Vive Pro or the Valve Index by virtue of its higher resolution. Image resolution has been increased per eye from the Pro’s (or Valve Index’) 1440 x 1600 to 2448 x 2448. This higher resolution gives it exceptional clarity with no screen door effect, but it is also demanding on video cards. By default at the Ultra or Extreme preset, the Vive console uses 150% SteamVR Render Resolution for the Vive Pro 2 which appeared to be set to 2748×2748 per eye for high end NVIDIA cards at the time we benchmarked our games.

Here is the OpenVR benchmark result which requires 100% SteamVR Render Resolution for its default run. We used the Vive Console Ultra setting at native resolution and 90Hz. We did not test the Extreme setting which allows up to 120Hz.

Although SteamVR sets the same resolution for the RTX 3090 and the RTX 3070 Ti, it uses a lower resolution for AMD cards at either 100% (2244×2244) or at 150%. In fact, yesterday's Vive software update lowered the default SteamVR resolution slightly for NVIDIA cards, which suggests that it is still a work in progress and is being fine-tuned; the 100% SteamVR render resolution was lowered from 2556×2556 to 2532×2532 yesterday. Our results reflect the higher render setting.

Some VR gamers prefer to lower the SteamVR Render Resolution, which defaults to 150% and mostly compensates for lens distortion, instead of lowering a game's preset or dropping individual settings. We decided to initially test at 100%, which is how we test the Reverb G2, the Vive Pro, and the Valve Index. Our follow-up review will also benchmark at the default 150% resolution.
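For reference, SteamVR applies its render resolution percentage to the total pixel count, so the per-axis resolution scales roughly with the square root of the percentage; the sketch below illustrates that relationship using the 2244×2244 figure mentioned above as the 100% base (the exact values the Vive Console chooses per GPU vendor are set by HTC's software, so treat this as an approximation).

```python
# Approximate relationship between the SteamVR supersampling % and per-axis resolution.
# SteamVR applies the percentage to total pixels, so each axis scales by sqrt(pct/100).
# The 2244x2244 base figure is the 100% value we saw reported for AMD cards.
import math

def per_axis_resolution(base_px: int, ss_percent: float) -> int:
    return round(base_px * math.sqrt(ss_percent / 100))

base = 2244
for pct in (50, 100, 150):
    print(f"{pct}% SteamVR render resolution -> ~{per_axis_resolution(base, pct)} px per axis")
```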

Yesterday, in response to our questions, Vive suggested that the SteamVR default Render Resolution should be left at 150%. Vive told BTR:

“Motion Compensation is the same as Motion Smoothing. The new lens and display requires our own motion compensation, and VIVE Console is the software that is driving the displays, so motion compensation is built into that.

For VIVE Pro 2, we set Steam’s supersampling setting as 150% by default, which makes up for the lens distortion. We found this to be the best value for SteamVR’s automatic performance scaling to scale and still reach 90 or 120 Hz on the majority of PCs we expect to be used to run VIVE Pro 2. However, users can still go into SteamVR to manually adjust their supersampling settings.

If we had set it to 100%, a lot of PCs would struggle under automatic settings. Render resolution is set by SteamVR and automatically scales to what it thinks is best for your system, VIVE Console handles display resolution.”

Motion Smoothing is disabled in SteamVR, but we actually didn’t see any FPS performance difference disabling or enabling Motion Compensation in the Vive console using fpsVR although the frametimes suffered. We see relatively minor visual differences between 100% and 150% SteamVR Render Resolution but even at the higher setting, lens distortion is still slightly visible to us particularly at the edges of the display.

At 50% SteamVR Render Resolution, there is a clear degradation of visuals, which indicates that the SteamVR Render Resolution is working properly. However, at 150% Render Resolution, frametimes go up (which is bad) for several games that we tested although the FPS remains at 45 FPS, which suggested to us that Vive's Motion Compensation may still be on, although Vive assures us it can be switched off in their console. We noticed that Motion Compensation artifacting became prominent and even disturbing if settings are pushed too high, as we found with Elite Dangerous.

Please note that FCAT VR doesn’t distinguish dropped frames from synthesized frames using the Pro 2 (or the Reverb G2) like it properly does for the Valve Index and the Vive Pro. We suggest that the vast majority of the frames reported as dropped are actually synthetically generated (reprojected) frames. It is likely that FCAT VR is not yet optimized for the Pro 2.

It is important to remember that BTR’s charts use frametimes in ms where lower is better, but we also compare “unconstrained framerates” which shows what a video card could deliver (headroom) if it wasn’t locked to either 90 FPS or to 45 FPS by the HMD. In the case of unconstrained FPS which measures just one important performance metric, faster is better.
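To make the relationship between frametimes, unconstrained FPS, and the 90/45 FPS lock concrete, here is a minimal sketch under the usual reprojection assumption: when the GPU cannot sustain the headset's 90Hz, the runtime drops the application to 45 FPS and synthesizes roughly every other displayed frame. The frametime values are illustrative, not our measured results.

```python
# Sketch: how FCAT VR style metrics relate to a 90Hz headset with reprojection.
# Assumes the runtime halves the application rate to 45 FPS when 90 FPS cannot be held,
# synthesizing (reprojecting) roughly every other displayed frame. Values are illustrative.

def summarize(avg_frametime_ms: float, refresh_hz: int = 90):
    unconstrained_fps = 1000 / avg_frametime_ms      # what the GPU could deliver, unlocked
    if unconstrained_fps >= refresh_hz:
        delivered_fps, synthetic_share = refresh_hz, 0.0
    else:
        delivered_fps, synthetic_share = refresh_hz / 2, 0.5   # ~50% synthesized frames
    headroom_pct = (unconstrained_fps / refresh_hz - 1) * 100
    return unconstrained_fps, delivered_fps, synthetic_share, headroom_pct

for ft in (7.8, 11.7):   # hypothetical average frametimes in ms
    u, d, s, h = summarize(ft)
    print(f"{ft} ms -> unconstrained {u:.1f} FPS, delivered {d:.0f} FPS, "
          f"synthetic share ~{s:.0%}, headroom {h:+.0f}%")
```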

Let’s individually look at our five VR games’ performance using FCAT VR. All of our games were benchmarked at 100% SteamVR resolution.

First up, Assetto Corsa Competizione.

Assetto Corsa Competizione (ACC)

BTR’s sim/racing editor, Sean Kaldahl created the replay benchmark run that we use for both the pancake game and the VR game. It is run at night with 20 cars, lots of geometry, and the lighting effects of the headlights, tail lights, and everything around the track looks spectacular.

Just like with Project CARS, you can save a replay after a race. Fortunately, the CPU usage is the same between a race and its replay so it is a reasonably accurate benchmark using the Circuit de Spa-Francorchamps.

iRacing may be more accurate or realistic, but Assetto Corsa Competizione has some appeal because it feels more real than many other racing sims. It delivers the sensation of handling a highly-tuned racing machine driven to its edge. We test using the VR Low preset.

VR Low

Here are the ACC frametimes using VR Low.

Here are the details as reported by FCAT-VR:

The RTX 3070 Ti delivered 102.85 unconstrained FPS with 15 dropped or synthesized frames and no Warp misses.

The RTX 3070 Ti has a little performance headroom and it is possible to play it using enhanced individual settings with minimal reprojected or synthesized frames but it is best suited for playing ACC on VR Low. VR High is unplayable.

Next, we check out Elite Dangerous.

Elite Dangerous (ED)

Elite Dangerous is a popular space sim built using the COBRA engine. It is hard to find a repeatable benchmark outside of the training missions.

A player will probably spend a lot of time piloting his space cruiser while completing a multitude of tasks as well as visiting space stations and orbiting a multitude of different planets (~400 billion). Elite Dangerous is also co-op and multiplayer with a very dedicated following of players.

We picked the Ultra Preset with the maximum FoV originally but the shimmering and artifacting from reprojection/Motion Compensation was awful, so we set everything to Medium leaving the FoV at maximum. Here is the frametime plot.

Here are the frametimes.

Here are the details as reported by FCAT-VR:

The RTX 3070 Ti delivered 128.79 unconstrained FPS with no Warp Misses nor any dropped or synthetic frames.

The experience playing Elite Dangerous at Ultra settings is awful but Medium seems perfect with some performance headroom to increase individual settings.

Next, we will check out a really demanding VR game, No Man’s Sky.

No Man’s Sky

No Man’s Sky is an action-adventure survival single and multiplayer game that emphasizes survival, exploration, fighting, and trading. It is set in a procedurally generated deterministic open universe, which includes over 18 quintillion unique planets using its own custom game engine.

The player takes the role of a Traveller in an uncharted universe, starting on a random planet with a damaged spacecraft, a jetpack-equipped exosuit, and a versatile multi-tool that can also be used for defense. The player is encouraged to find resources to repair his spacecraft, allowing for intra- and inter-planetary travel, and to interact with other players.

We set the settings to Enhanced which is above Low and below High, but we also set the anisotropic filtering to 16X and upgraded to FXAA+TAA. The game has recently implemented DLSS 2.1 and we used the highest visual quality preset, Quality, which gives a much smaller performance boost than the other DLSS settings.

Here is the No Man’s Sky Frametime plot.

Here are the FCAT-VR details of our comparative runs.

The RTX 3070 Ti produced 85.37 unconstrained FPS with no dropped frames or Warp misses, but it required 3200 (50%) synthetic frames.

The Low Preset may be better suited for play with the RTX 3070 Ti, or else individual settings may be lowered to maintain a balance of performance and visuals. However, it may be best to use DLSS Performance instead and accept slight artifacting. We were very impressed with the Enhanced preset using DLSS Quality, and the high resolution screen of the Vive Pro 2 makes playing this game an even more extraordinary experience where the game comes alive.

Let’s continue with another demanding VR game, Project CARS 2, that we still like better than its successor.

Project CARS 2 (PC2)

There is a real sense of immersion that comes from playing Project CARS 2 in VR using a wheel and pedals. It uses its in-house Madness engine, and the physics implementation is outstanding. We are disappointed with Project CARS 3, and will continue to use the older game instead for VR benching.

Project CARS 2 offers many performance options and settings, and we prefer playing with SMAA rather than MSAA.

Project CARS 2 performance settings

We originally tried maximum settings including for Motion Blur but that wasn’t possible so we set everything to Medium.

Here is the frametime plot.

Here are the FCAT-VR details.

The RTX 3070 Ti delivered 77.49 unconstrained FPS with 4802 (50%) synthesized or dropped frames and with no Warp misses.

Playing Project CARS 2 on the Medium preset, we would recommend lowering individual settings or even the resolution as needed to stay out of reprojection. However, even on Medium, the game looks great using the Vive Pro 2.

Let’s benchmark Skyrim VR.

Skyrim VR

Skyrim VR is an older game that is no longer supported by Bethesda, but fortunately the modding community has adopted it. It is not as demanding as many of the newer VR ports so its performance is still very good on maxed-out settings using its Creation engine.

We benchmarked Skyrim VR using its highest settings, but we did not increase its in-game supersampling.

Here are the frametime results.

The RTX 3070 Ti managed 130.68 unconstrained FPS with no dropped frames, no synthetic frames, and no Warp misses.

The RTX 3070 Ti can play Skyrim at its maxed out in-game settings, although we did not benchmark in-game supersampling since it produced reprojected or synthesized frames. Since there is some performance headroom, it suggests to us that mods may be used with the Vive Pro 2 and an RTX 3070 Ti class of video card.

These benchmark results bring up more questions than answers that we hope to cover in a follow-up review dedicated to the Vive Pro 2 next week. However, we love the Pro 2, and we have ordered our own headset and will keep it for future VR benchmarking.

To see if the RTX 3070 Ti is a good upgrade from the other video cards, we test workstation, creative, and GPGPU benchmarks, starting with Blender.

Blender 2.92 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked three Blender 2.92 benchmarks which measure GPU performance by timing how long it takes to render production files. We tested seven of our comparison cards running on the GPU instead of the CPU, using both CUDA and OptiX for the GeForce cards and OpenCL for the Radeons, which do not support CUDA.

Here are the RTX 3070 Ti’s CUDA and OPTIX scores.

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.

Blender’s benchmark performance with the RTX 3070 Ti is slower than the RTX 3080 and slightly faster than the RTX 3070.
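Since the chart reports render times in minutes and seconds, a quick way to compare two cards is to convert each result to seconds and express one as a speedup over the other; the sketch below shows the arithmetic with hypothetical times, not our actual scores.

```python
# Sketch: converting "m:ss" Blender render times to seconds and comparing two cards.
# The times below are hypothetical placeholders, not our measured results.

def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

card_a = to_seconds("2:45")   # hypothetical render time, card A
card_b = to_seconds("3:30")   # hypothetical render time, card B

print(f"Card A is {card_b / card_a:.2f}x faster than card B (lower time is better)")
```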

Next we look at the OctaneBench.

Octane Bench

OctaneBench allows you to benchmark your GPU using OctaneRender. The hardware and software requirements to run OctaneBench are the same as for OctaneRender Standalone.

We run OctaneBench 2020.1.5 for Windows and here are the RTX 3070 Ti’s complete results with an overall score of 454.87.

Here is the summary chart comparing our five GeForce cards. Radeons cannot run the Octane benchmark.

The RTX 3070 Ti is a decent card when used for rendering but closer to the RTX 3070 in performance than the RTX 3080.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.33

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days.

Here are the RTX 3070 Ti AIDA64 GPGPU results.

Here is the chart summary of the AIDA64 GPGPU benchmarks with seven of our competing cards side-by-side.

The RTX 3070 Ti is a fast GPGPU card that is slightly faster than the RTX 3070. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020/21

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2021 R2 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020/21 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them. There was a bug in one Processing benchmark that affected the Red Devil RX 6800 XT with OpenCL that was addressed by SiSoft by the time we tested the RX 6800.

In Sandra GPGPU benchmarks, the RTX 3070 Ti is similar in performance to the RTX 3070. Interestingly, the RTX 3070 Ti (and RTX 3080 Ti’s) Hashing bandwidth is much lower than the RTX 3080/RTX 3070 and even the RX 6800 XT as NVIDIA has limited its cryptocurrency mining ability. However, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses.

SPECworkstation3 Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 summary and raw scores for the RTX 3070 Ti.

Here are the SPECworkstation3 results summarized in a chart along with six competing cards. Higher is better.

Using SPEC benchmarks, the RTX 3070 Ti is closer in performance to the RTX 3070 than it is to the RTX 3080. However, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) has released a 2020 version of its SPECviewperf benchmark that features updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management.

We benchmarked at 4K and here is the summary for the RTX 3070 Ti.

Here are SPECviewperf 2020 GPU benchmarks summarized in a chart together with six other cards.

Again the RTX 3070 Ti is slightly faster than the RTX 3070 but not close to RTX 3080 performance.

After seeing these benches, some creative users may wish to upgrade their existing systems with a new RTX 30X0 series card based on the performance increases and the associated increases in productivity that they require. The decision to buy an RTX 3070 Ti should be based on the workflow and requirements of each user as well as their budget. Time is money depending on how these apps are used. However, the target demographic for the RTX 3070 Ti is primarily gamers, especially at 1440P and at 1080P.

Let’s head to our conclusion.

Final Thoughts

The $599 RTX 3070 Ti FE performed well compared to the RX 6800. However, at only around 3-10% faster than the $100 less expensive RTX 3070, it is not a particularly good value. It does have faster GDDR6X memory, slightly more cores, and a small clock speed bump, together with a much better cooling system.

If a gaming enthusiast wants a very fast upper-midrange card, the RTX 3070 Ti is an excellent card for ultra 1080P or 1440P gaming. It can also be used for 4K gaming if settings are lowered.

The Founders Edition of the RTX 3070 Ti is well-built, solid, and good-looking, and it stays cool and quiet even when overclocked. The RTX 3070 Ti Founders Edition will offer a solid upgrade for first generation Turing owners of the RTX 2070 or any earlier generation cards. However, it is not really an upgrade from a $499 RTX 3070 FE which has a higher value to price ratio – if it can be found at MSRP.

Pros

  • The RTX 3070 Ti is fast enough for VR gaming with the Vive Pro 2 at 100% SteamVR render resolution
  • The RTX 3070 Ti is perfect for 1440P or 1080P gaming although settings have to be lowered for 4K; and it’s also very useful for intensive creative, SPEC, or GPGPU apps
  • Ray tracing is a game changer in every way and the RTX 3070 Ti is much faster than the RX 6800 or RX 6800 XT when DLSS 2.0 or ray tracing features are enabled. DLSS 2.0 has been rightly called “a miracle” for gamers including for VR gamers
  • Reflex and Broadcast are important features for competitive gamers and broadcasters
  • Ampere improves over Turing with AI/deep learning and ray tracing to improve visuals while also increasing performance with DLSS 2.0 and Ultra Performance DLSS
  • The RTX 3070 Ti Founders Edition cooling design is quiet and efficient, and its upgraded flow-through design is a real upgrade over the RTX 3070 FE. The GPU in a well-ventilated case stays cool even when overclocked and it remains quiet using the stock fan profile
  • The industrial design is eye-catching and it is solidly built

Cons

  • High Price
  • Lack of availability

The Verdict

If you are a gamer who plays at maxed-out 1080P, 1440P, or even at 4K with lesser settings, you may want to upgrade to a RTX 3070 Ti. The Founders Edition offers good performance value as an upgrade from previous generations with the additional benefit of being able to handle ray tracing much better. It is much faster in ray traced games than any Radeon, and DLSS 2.0 is a true game changer that brings extra performance without any compromise in visuals.

The RTX 3070 Ti Founders Edition is available starting tomorrow for $599 from NVIDIA’s online store, and USA customers can also purchase these cards directly from Best Buy, both online and in person. Only relatively few lucky gamers will be able to buy one at SEP, but we believe the supply issue will ease and pricing will return to normal by the Autumn, when this review will be even more useful in making a high end card selection.

Stay tuned, there is a lot more on the way from BTR. Next week, we will test multiple cards in VR using the brand new Vive Pro 2. We are in touch with HTC/Vive and hope to have answers and solid performance results by then. Stay tuned to BTR!

Happy Gaming!

The Red Devil & Reference RX 6700 XT take on the RTX 3070 & RTX 3060 Ti in 35 Games https://babeltechreviews.com/the-red-devil-rx-6700-xt-review-35-games/ https://babeltechreviews.com/the-red-devil-rx-6700-xt-review-35-games/#comments Wed, 17 Mar 2021 08:26:34 +0000 /?p=22369 Read more]]> The PowerColor Red Devil RX 6700 XT takes on the Reference RX 6700 XT & the RTX 3070 & RTX 3060 Ti in 35 Games

The Red Devil RX 6700 XT, a 12GB vRAM-equipped card, arrived at BTR for evaluation from PowerColor last week with no manufacturer recommended (SEP/MSRP) pricing. We have been comparing it with the just released $479 RX 6700 XT reference card from AMD, and also versus the $499 GeForce RTX 3070 Founders Edition (FE) and the $399 RTX 3060 Ti (FE) using 35 games, GPGPU, workstation, SPEC, and synthetic benchmarks.

We will also compare the performance of these competing cards with the RX 6700 XT’s bigger brother, the RX 6800; with its predecessor the RX 5700 XT Anniversary Edition (AE); and also with the $329 RTX 3060; but especially versus the RTX 2060 and the GTX 1060/6GB to see how older cards fare to complete BTR’s 9-card Big Picture.

The Red Devil RX 6700 XT is factory clocked higher than the reference version (below) using its OC BIOS.

According to its specifications, the Red Devil RX 6700 XT can boost up to 2622MHz out of the box or 41MHz higher than the reference RX 6700 XT which clocks to 2581MHz. It also looks different from older generation classic Red Devils, arriving in a more neutral gray color instead of in all red and black. The Red Devil RX 6700 XT features a RGB mode whose LEDs default to a bright red which may be customized by PowerColor’s DevilZone software.

The Reference and Red Devil RX 6700 XT Features & Specifications

First let’s look at the reference RX 6700 XT specifications compared with its predecessor, the RX 5700 XT.

Source: AMD

From what we can see from the specifications, the new card should be solidly faster than its predecessor.

Here are the Red Devil RX 6700 XT specifications according to PowerColor:

Specifications

Source: PowerColor

Features

Here are the Red Devil RX 6700 XT features.

Source: PowerColor

Additional Information from PowerColor

  • The card has 2 modes, OC and Silent, with 203W / 186W power targets selected by a BIOS switch on the side of the card. PowerColor designed this card to be very quiet; even in performance mode it is considerably quieter than the reference board, and the Silent mode is truly whisper quiet. In a normal case with optimal airflow the card will most likely run around 1000 RPM in that mode.
  • The board has a 12 phase (10+2 DrMOS) VRM versus the 9 phase (7+2) design of the reference card, meaning it is over-spec’d in order to deliver the best stability and overclocking headroom. It is capable of well over 250W, and such a VRM will run cooler and last longer.
  • DrMOS and high-polymer caps are used in the design with no compromises.
  • The cooler features two 100mm fans and a center 90mm fan, all with dual ball bearings, plus six 6Φ heat pipes across a high density heatsink with a copper base. As you might notice, the PCB is shorter than the cooler; this design is a continuation of what PowerColor has implemented for many generations and has just now become almost an industry standard.
  • RGB is enhanced; the Red Devil now connects to the motherboard’s aRGB (5V 3-pin) connector.
  • The Red Devil has mute fan technology; the fans stop under 60C!
  • The ports are LED-illuminated so you can see where to plug in cables in the dark.
  • The backplate does not have thermal pads; instead it has cutouts that let the PCB breathe, which under high heat scenarios is more beneficial than thermal pads since a backplate can become a heat trap.
  • Copper Base Direct Touch – A smooth copper base with direct contact to the GPU and VRAMs provides for optimized heat transfer and dissipation
  • Buyers of the Red Devil Limited Edition will be able to join exclusive giveaways as well as access the Devil Club website, a membership club for Devil users only which gives them access to news, competitions, downloads, and most importantly, instant support via live chat.

The Big Navi 2 Radeon 6000 family

The reference Radeon 6700 XT at $479 competes with the RTX 3070 FE ($499) and is priced $20 lower, but it sits $80 higher than the RTX 3060 Ti ($399). This should tell us that it is expected to trade blows with the RTX 3070, but be solidly faster than the RTX 3060 Ti.

The RX 6800 at $579 competes below the RTX 3080 at $699, while the RX 6900 XT at $999 is AMD’s flagship and sits below the $1499 RTX 3090. Of course, as PowerColor would have us understand, none of these “suggested” prices have any meaning to gamers currently because of the supply issues and extreme demand caused by the dual pandemics – COVID-19 and cryptocurrency mining.

Source: AMD

Above is a die shot of the GPU powering the Radeon RX 6700 XT courtesy of AMD.

Source: AMD

AMD has their own ecosystem for gamers and many unique new features for the Radeon 6000 series. However, the above slide from AMD does not mention two very important features – the Infinity Cache and Smart Access Memory.

Infinity Cache & Smart Access Memory

AMD’s RDNA 2 architecture includes the Infinity Cache which alters the way data is delivered to GPUs. This global cache allows fast data access and increases bandwidth with higher performance and better power efficiency. This highly optimized on-die cache uses 96MB of AMD Infinity Cache delivering up to 2.5x the effective bandwidth compared to 256-bit 12Gbps GDDR6.
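To put that claim in perspective, raw GDDR6 bandwidth is simply the bus width in bytes multiplied by the per-pin data rate; the sketch below computes it for the 256-bit 12Gbps comparison point cited above and applies the claimed 2.5x multiplier (the multiplier is AMD's figure, not something we measured).

```python
# Sketch: raw GDDR6 bandwidth and AMD's claimed "effective" bandwidth with Infinity Cache.
# Formula: bandwidth (GB/s) = (bus width in bits / 8) * data rate in Gbps.
bus_width_bits = 256              # comparison point cited in the text above
data_rate_gbps = 12               # per-pin data rate in the same comparison
infinity_cache_multiplier = 2.5   # AMD's claimed effective-bandwidth uplift

raw_gbps = bus_width_bits / 8 * data_rate_gbps
print(f"Raw GDDR6 bandwidth: {raw_gbps:.0f} GB/s")
print(f"Claimed effective bandwidth with Infinity Cache: {raw_gbps * infinity_cache_multiplier:.0f} GB/s")
```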

Unfortunately, BTR’s test bed uses Intel’s latest 10th generation flagship CPU, the i9-10900K, which does not support Smart Access Memory, so our results will probably be lower than what a gamer using a Ryzen 5000 platform will see.

AMD’s Smart Access Memory is a new feature for the Radeon RX 6000 Series graphics cards that enables additional memory space to be mapped to the base address register, resulting in performance gains for select games when paired with an AMD Ryzen 5000 Series processor or with some Ryzen 3000 series CPUs. Using PCIe, the Base Address Register (BAR) defines how much GPU memory space can be mapped. Until now, CPUs could only access a fraction of GPU memory, often limited to 256MB. With less efficient data transfer, performance is restricted.
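As an aside for readers who want to see what their own system exposes, the BAR sizes a GPU advertises can be read from PCI config space; with Resizable BAR active, one BAR is typically as large as the card's full vRAM rather than 256MB. The sketch below parses the sysfs resource file a Linux system provides for a PCI device (the device path is a placeholder, and our test bed runs Windows, so this is purely illustrative).

```python
# Sketch: computing PCI BAR sizes from a Linux sysfs "resource" file.
# Each line is "start end flags" in hex; size = end - start + 1 for populated BARs.
# The device path below is a placeholder, not from our test system (which runs Windows).
from pathlib import Path

def bar_sizes(resource_file: str):
    sizes = []
    for line in Path(resource_file).read_text().splitlines():
        start, end, _flags = (int(x, 16) for x in line.split())
        if end > start:
            sizes.append(end - start + 1)
    return sizes

for size in bar_sizes("/sys/bus/pci/devices/0000:01:00.0/resource"):
    print(f"BAR size: {size / (1024**2):.0f} MiB")
```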

NVIDIA has worked with its partners and with Intel to enable Resizable BAR which currently is enabled on the EVGA Z490 FTW motherboard but only works with selected games and with the RTX 3060 for now. When we tried to enable it for the RX 6700 XT, our PC refused to boot. So we had to disable it and test all of our video cards and games without Resizable BAR limiting the RX 6700 XT’s performance.

Last Friday, AMD explained that we would have to do a clean installation of Windows 10 if we wanted to use it, but we simply had no time. Here are their instructions for enabling Resizable BAR for our Intel Z490 motherboard that we shall follow for our future reviews:

If you would like to enable Resizable BAR, we recommend re-installing Windows 10 with these steps:

  • Open the BOOT menu and select CSM (Compatibility Support Module)
  • Set this value to DISABLED
  • Open the ADVANCED menu and choose PCI Configuration
  • Set Above 4G Decoding to Enabled
  • Set Re-Size BAR Support to Enabled
  • Save your changes and install Windows 10

So our performance results may be lower for selected games that can take full advantage of Resizable BAR or Smart Access Memory. Hopefully we will upgrade to a Ryzen 5950X when they become available at a reasonable price and we will retire our i9-10900K. We already are very unhappy with being limited to PCIe Generation 3 using fast SSDs just because Intel chose to hold back the feature until their upcoming 11th generation.

The Test Bed

BTR’s test bed consists of 35 games and 3 synthetic game benchmarks at 1920×1080 and 2560×1440, as well as SPEC, Workstation, and GPGPU benchmarks. Our latest games include Hitman 3, Cyberpunk 2077, DiRT 5, and Godfall. The testing platform uses a recent installation of Windows 10 64-bit Pro Edition, and our CPU is an i9-10900K which turbos all 10 cores to 5.1/5.0GHz, an EVGA Z490 FTW motherboard, and 32GB of T-FORCE Dark Z DDR4 at 3600MHz. The games, settings, and hardware are identical except for the cards being compared.

First, let’s take a closer look at the new PowerColor Red Devil RX 6700 XT which we shall compare with the reference RX 6700 XT.

A Closer Look at the Reference and PowerColor Red Devil RX 6700 XT

Although the Red Devil RX 6700 XT advertises itself as a premium 7nm 12GB vRAM-equipped card on AMD’s RDNA 2 architecture which features 1440P and PCIe 4.0, the cover of the box uses almost no text in favor of stylized imagery.

The back of the box touts key features which now include HDMI 2.1 VRR, ray tracing technology, and VR Ready Premium, as well as stating its 700W power supply and system requirements. AMD’s technology features are highlighted, and the box calls out PowerColor’s custom cooling solution, dual BIOSes, RGB software and output LEDs, and a solid backplate.

Opening its very well-padded box, we see a quick installation guide, RGB LED cable, and an invitation to join PowerColor’s Devil’s Club. PowerColor has a nicer presentation than AMD’s reference RX 6700 XT which is rather barebones.

AMD directs you to their website for installation instructions while PowerColor includes detailed instructions.

The Red Devil RX 6700 XT is a large tri-fan card in a 2.5 slot design which is quite handsome with PowerColor’s colors and even more striking with the RGB on. Here is the Red Devil next to a reference RX 5700 XT and flanked on both sides by a RTX 3060 Ti FE and a RTX 3070 FE to show how much larger and beefier a card it is than the other three cards.

The Red Devil uses two 8-pin PCIe connectors while the reference version uses one 8-pin and one 6-pin. Looking at the other edge, we can see it is all heatsink fins for cooling, as is typical of Red Devil cards.

Below, the PowerColor Red Devil RX 6700 XT’s sturdy backplate features a stylized custom devil symbol that lights up in the color of your choice if synced, red being the default. There is also a switch to choose between the default overclock (OC) BIOS and the Silent BIOS. We didn’t bother with the Silent BIOS but it is good to have in case a flash goes bad.

Compare with the reference RX 6700 XT backplate which is rather plain-Jane.

The Red Devil’s RX 6700 XT’s connectors include 2 DisplayPorts, 1 HDMI connection, and a USB Type C connector. There is an LED that illuminates this panel for making easier connections in the dark.

It shares the same IO connectors with the reference RX 6700 XT below, but the Red Devil has a better system exhausting hot air out of the back of the PC.

The specifications look good and the Red Devil itself looks great with its default RGB bright red contrasting with the black backplate and its aggressively lit-up end perhaps is stylistically reminiscent of an automotive grill.

Unlike the reference version, which only lights up the logo, you may also enhance and coordinate the RGB colors by connecting the card to the motherboard with the supplied aRGB (5V 3-pin) cable and using the DevilZone RGB software. It looks awesome.

Let’s check out its performance after we look over our test configuration and more on the next page.

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.9 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • Red Devil RX 6700 XT 12GB, factory settings and overclocked, on loan from PowerColor
  • Radeon RX 6700 XT 12GB, reference version stock clocks and overclocked, on loan from AMD
  • Radeon RX 6800 Reference version 16GB, stock settings, on loan from AMD
  • Radeon RX 5700 XT 8GB Anniversary Edition, stock AE clocks.
  • EVGA RTX 3060 Black 12GB, stock clocks, on loan from NVIDIA
  • RTX 3070 Founders Edition 8GB, stock clocks, on loan from NVIDIA/EVGA
  • RTX 3060 Ti Founders Edition 8GB, stock clocks, on loan from NVIDIA/EVGA
  • RTX 2060 Founders Edition 6GB, stock clocks, on loan from NVIDIA
  • EVGA GTX 1060 SC 6GB, factory SC clocks, on loan from EVGA
  • 2 x 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive; one for AMD and one for NVIDIA
  • 1.92TB SanDisk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking and overclocking performed with the case closed

Test Configuration – Software

  • GeForce 461.72 drivers are used for the RTX 3070, GeForce 461.64 drivers for the RTX 3060 and RTX 3060 Ti, and GeForce 461.40 drivers for the two older GeForce cards.
  • Adrenalin 2021 Edition 20.50.11 press drivers are used for the RX 6800 and the RX 6700 XT reference and Red Devil editions, and 21.2.3 is used for the RX 5700 XT Anniversary Edition (AE).
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel; Vsync off.
  • All optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings in the AMD control panel.
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold, with minimum frame rates shown on the chart next to the averages in a smaller italics font where higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates v10.0.19042. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • Wolfenstein Youngblood
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • Hitman 3
  • Cyberpunk 2077
  • DiRT 5
  • Godfall
  • Call of Duty Black Ops: Cold War
  • Assassin’s Creed: Valhalla
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • F1 2020
  • Gears 5
  • Tom Clancy’s The Division 2
  • Metro Exodus
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider
  • Project CARS 2
  • Forza 7

DX11

  • Crysis Remastered
  • MechWarrior 5: Mercenaries
  • Destiny 2 Shadowkeep
  • Borderlands 3
  • Total War: Three Kingdoms
  • Far Cry New Dawn
  • Assetto Corsa Competizione
  • Monster Hunter: World
  • Overwatch
  • Grand Theft Auto V

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.912 benchmark
  • Sandra 2020/2021 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

Next the AMD settings.

AMD Adrenalin Control Center Settings

All AMD settings are set so that all optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings. All Navi cards are capable of high Tessellation unlike earlier generations of Radeons.

Anisotropic Filtering is disabled by default but we always use 16X for all game benchmarks.

Let’s check out overclocking, temperatures and noise next.

Overclocking, temperatures and noise

We spent a lot of time overclocking both RX 6700 XTs for this review.

Above are the reference RX 6700 XT Wattman default settings, which include the power limit set to default. For the reference card, performance didn’t change whether the power limit was set to default or higher; in fact, setting the power limit higher than 5% at our sample’s maximum overclock made it unstable, while we needed the 5% increase to stabilize that maximum overclock. Reference clocks generally run from 2544MHz to 2571MHz at stock settings, which is right around AMD’s maximum boost of “up to 2581MHz”.

The Reference RX 6700 XT runs rather warm at stock and the fan speed hovers around 2000rpm to keep the temperatures below 74C and the junction temperatures under 90C under Heaven 4.0’s full load. At 2000rpm the reference RX 6700 XT can barely be heard over our other case fans.

Next we used trial and error to find the maximum performance at the edge of stability by maxing out the memory (107%) and increasing the clocks by 8% as below.

At the very edge of stability, the clocks run from 2748MHz to a peak of 2766MHz, but this time the temperatures rise above 75C with junction temperatures above 90C, and it begins to throttle performance because the fan speed is still low as set by the automatic profile.
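As a rough sanity check on those numbers, scaling the observed stock clock range by the 8% slider setting lands very close to what we measured; the sketch below assumes the percentage applies roughly linearly to the running clocks, which matches our observations but is not an official description of Wattman's behavior.

```python
# Rough sanity check: scale the observed stock clock range by the Wattman % increase.
# Assumes the slider maps roughly linearly onto running clocks (matches our observations).
stock_low, stock_high = 2544, 2571   # observed stock clock range (MHz)
clock_increase_pct = 8               # Wattman clock slider setting

est_low = stock_low * (1 + clock_increase_pct / 100)
est_high = stock_high * (1 + clock_increase_pct / 100)
print(f"Estimated overclocked range: {est_low:.0f}-{est_high:.0f} MHz (observed: 2748-2766 MHz)")
```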

So let’s compare with the Red Devil RX 6700 XT.

The Red Devil RX 6700 XT’s clocks are specified to boost “up to 2622MHz” and our sample can run from 2588MHz to 2596MHz under full load, at default. The Red Devil’s temperatures stay low in the mid-60sC with a junction temperature below 85C with the three fans quietly running under 1100rpm even using the OC BIOS. It is quieter than the reference version. So let’s overclock it to the max.

At max overclock, we are still limited to a 7% memory overclock, but we overclocked the core to 9% bringing our clocks to 2790MHz-2800MHz or almost 35MHz higher than the reference core. Now the Red Devil’s three fans speed up peaking below 1790rpm which is still quieter than the dual fans of the reference version running above 2000rpm. At its maximum overclock, the Red Devil remains below 55C and the junction temperature never rises above 75C – so it doesn’t throttle like the reference version and it remains very quiet.

There is a small performance increase from overclocking the RX 6700 XT core by 8% to 9% and increasing the memory by 7%. Unfortunately, AMD has evidently locked down overclocking on all RX 6700 XT cards in an attempt to maximize overall performance by limiting the voltage to 1200mV. We would also suggest that the RX 6700 XT is rather voltage constrained and the Red Devil could seriously benefit from more voltage – but not necessarily the reference version. We suspect that many enthusiast gamers will use MPT (More Power Tool) and risk their warranty to gain a substantially higher Red Devil overclock, although we cannot recommend it.

We believe that the Red Devil’s overclock will not degrade over time as its PCB components are fit to run at the highest overclock settings all the time – perhaps unlike the reference version, which, although well-built, is not over-engineered for ultimate maximum reliability.

Of course, many gamers will want to fine-tune their own overclock and undervolting is a possibility. We have found that Red Devils are generally power-hungry and as the voltage limits are increased using MPT, the Power Limit usually has to increase also. Check the overclocking chart in the next section for performance increases in gaming for both the reference version and the Red Devil RX 6700 XT.

Let’s head to the performance charts to see how the performance of the RX 6700 XTs at reference and at Red Devil clocks compare with 8 other cards.

Performance summary charts

Here are the performance results of 35 games and 3 synthetic tests comparing the factory-clocked 12GB Red Devil and reference RX 6700 XTs with the RTX 3070 FE 8GB and versus the RTX 3060 Ti 8GB plus five other cards all at their factory set clocks. The highest settings are used and are listed on the charts. The benches were run at 1920×1080 and at 2560×1440. Click on each chart to open in a pop-up for best viewing.

Most gaming results show average framerates in bold text, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. The games benched with OCAT show average framerates but the minimums are expressed by frametimes in ms where lower numbers are better.

The Red Devil RX 6700 XT & the reference RX 6700 XT vs. the RTX 3070 & RTX 3060 Ti FEs

The first set of charts show our four main competing cards. Column one represents the RTX 3070 reference version ($499) performance, column two is the Red Devil RX 6700 XT (no SEP), column three is the RTX 3060 Ti FE ($399), and column four represents the reference RX 6700 XT ($479) performance.

The Red Devil RX 6700 XT is perhaps around 1-2% faster than the reference version and it more-or-less trades blows with the RTX 3070 Founders Edition in some games although the GeForce card is faster overall using our Intel platform.

NVIDIA cards tend to be stronger in DX11, and it appears that Vulkan performance is also strong on the RTX 3070, although one has to go on a game-by-game basis to see which card is faster in DX12. Since we do not use Resizable BAR or have Smart Access Memory, we expect that some games would shift in favor of the Radeons using a Ryzen 5000 platform.

Let’s see how the reference and Red Devil RX 6700 XTs fit in with our expanded main summary chart, the “Big Picture”, comparing a total of nine cards.

The Big Picture

Next we see the Red Devil RX 6700 XT performance compared with eight other cards on recent drivers.

The RX 6700 XT is in a class above the RX 5700 XT and it clearly outclasses the other two older cards, the RTX 2060 and the GTX 1060.

Next we look at seven ray traced enabled games, each using maximum ray traced settings where available.

Ray Traced Benchmarks

The Red Devil RX 6700 XT is next compared with our other two competing cards when ray tracing is enabled in seven games. Next, let’s look at the Big Picture of ray traced benches.

The RX 6700 XTs now appear to perform similarly to the RTX 3060/2060 Super when ultra ray tracing features are enabled in-game. But AMD has no hardware equivalent to NVIDIA’s dedicated AI Tensor cores, so it cannot take advantage of DLSS enabled games which puts its ray tracing performance even further behind.

Although AMD has promised a DLSS equivalent in the future, the RX 6700 XT cannot currently compete with the RTX 3070 or RTX 3060 Ti in our benchmarked ray traced games.

Next we look at overclocked performance.

Overclocked benchmarks

These ten benchmarks were run with both RX 6700 XTs overclocked as far as they could go while remaining stable, as described in the overclocking section. The Red Devil RX 6700 XT factory-clocked results are presented first and the manually overclocked Red Devil is in the second column. The third column represents the manually overclocked reference RX 6700 XT performance results, followed by the stock results in the last column.

There is a reasonable performance increase from manually overclocking the Red Devil RX 6700 XT beyond its factory clocks, which already give it an approximately 1% performance boost over the reference version. AMD has evidently locked down overclocking on all RX 6700 XT cards in an attempt to maximize overall performance, but by virtue of its better cooling, the manually overclocked Red Devil achieves higher performance than the reference version, which throttles when it gets too warm.

Let’s look at non-gaming applications next to see if the RX 6700 XT is a good upgrade from the other video cards we test starting with Blender.

Blender 2.912 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked three Blender 2.90 benchmarks which measure GPU performance by timing how long it takes to render production files. We tested nine of our comparison cards using OpenCL for the Radeons and CUDA and OptiX on GeForce – all running on the GPU instead of using the CPU.

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.

OpenCL does not appear as well-optimized for Radeons compared with either Optix or CUDA for GeForce.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.32

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison with the GPU GPGPU benchmarks.

Here are the Red Devil RX 6700 XT AIDA64 GPGPU results compared with an overclocked i9-10900K.

Here is the chart summary of the AIDA64 GPGPU benchmarks with nine of our competing cards side-by-side.

The RX 6700 XT is a fast GPGPU card and it compares favorably with the Ampere cards, being weaker in some areas and stronger in others. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them.

In Sandra GPGPU benchmarks, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses. However, we see very solid improvement of the RX 6700 XT over the RX 5700 XT.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 summary and raw scores for the Red Devil RX 6700 XT.

Here are the Red Devil SPECworkstation3 results summarized in a chart of our nine competing cards. Higher is better.

Using SPEC benchmarks, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) released a new 2020 version of its SPECviewperf benchmark last year, featuring updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management.

We benchmarked at 4K and here are the summary results for the Red Devil RX 6700 XT.

Here are SPECviewperf 2020 GPU reference and Red Devil RX 6700 XT benchmarks summarized in a chart together with eight other cards.

Again we see different architectures with different strengths and weaknesses. The reference version and the Red Devil RX 6700 XT are quite close in performance and they are significantly faster than the RX 5700 XT.

After seeing these benches, some creative users will probably upgrade their existing systems with a new card based on the performance increases and the associated increases in productivity that they require. The decision to buy a new video card should be based on the workflow and requirements of each user as well as their budget. Time is money depending on how these apps are used. However, the target demographic for the reference and Red Devil RX 6700 XTs is primarily gamers.

Let’s head to our conclusion.

Final Thoughts

The Red Devil RX 6700 XT improves significantly over the RX 5700 XT and it trades blows with the RTX 3070 in multiple rasterized games. The reference and Red Devil RX 6700 XT beat the last generation cards, including the RX 5700 XT and RTX 2060, although they struggle with ray traced games, especially when DLSS is used on the GeForce cards. We somewhat handicapped the RX 6700 XTs by not being able to use Smart Access Memory, and we expect that performance would be higher if we used a Ryzen 5000 platform.

For Radeon gamers, the reference and Red Devil RX 6700 XTs are a good alternative to GeForce Ampere cards for the majority of modern PC games that use rasterization. The RX 6700 XT offers 12GB of GDDR6 versus the 8GB of GDDR6 that the RTX 3070 and RTX 3060 Ti are equipped with. The RTX 3060 also has 12GB of vRAM, but that capacity appears wasted on a card that is outclassed by the RX 6700 XT.

At its suggested price of $479, or $20 less than the RTX 3070, the reference RX 6700 XT offers a good value – if it can be found at all. Unfortunately, this launch has proved to be an extremely high-demand, limited-supply event, and it will probably be impossible for most gamers to purchase one. The same thing has happened with Ampere cards, where stock is still trickling in and is purchased the instant it becomes available.

Prices are ridiculously high, and many resellers are taking advantage of the demand by raising prices significantly. They realize that ETH (Ethereum) cryptocurrency mining will probably go bust relatively soon (if we may speculate based on what happened in 2017) as mining difficulty continues to rise and summer cooling costs push miners to sell their used cards.

ETH prices are starting to show drastic swings as those at the top work to prop the currency up. What goes boom also goes bust, and relatively soon we will see the used market flooded with cheap mining cards, which will ease availability and return video card pricing to a buyer’s market – so please be patient.

PowerColor hasn’t set any pricing on the Red Devil RX 6700 XT, allowing resellers to set their own. The company claims that its margins are actually below its usual historically low double digits (10-12%) for a new product. Unfortunately, it’s hard to recommend any card with no suggested price, even though it is overclocked, very nicely equipped, and better built than the well-designed $479 reference version. We wish that we could say that “PowerColor thinks their Red Devil is worth $100 more than the reference version” – and we would agree. But right now there is no pricing frame of reference whatsoever.

We recommend the Red Devil RX 6700 XT as a great choice out of multiple good ones, especially if you are looking for good looks with RGB, an exceptional cooler, great performance at 2560×1440, PowerColor’s excellent support, and overall good value, assuming that stock and pricing stabilize. We are convinced that PowerColor is an outstanding AMD AIB, and we never hesitate to recommend it to our friends. When we have a choice, we pick, and have picked, PowerColor video cards for our own purchases.

Let’s sum it up:

The Red Devil RX 6700 XT Pros

  • The PowerColor Red Devil RX 6700 XT is much faster than the last generation RX 5700 XT by virtue of new RDNA 2 architecture. It beats the RTX 2060 and the RTX 3060 as it trades blows with the RTX 3070 in some raster games.
  • 12GB vRAM may make the RX 6700 XT more useful for future gaming than the 8GB vRAM the RTX 3070 or RTX 3060 Ti are equipped with
  • The Red Devil RX 6700 XT has excellent cooling with less noise than the reference version – plus it does not throttle from any thermals
  • The Red Devil RX 6700 XT has a very good power delivery system and 3-fan custom cooling design that is very quiet when overclocked even using the OC mode
  • A dual BIOS gives the user a choice of quiet operation with less overclocking, or a bit more noise with an unlocked power limit and higher overclocks. It’s also a great safety feature if a BIOS flash goes bad
  • FreeSync2 HDR eliminates tearing and stuttering
  • Infinity Cache & Smart Access Memory give higher performance with the Ryzen 5000 series
  • Customizable RGB lighting and a neutral color allow the Red Devil to fit into any color scheme using the DevilZone software program.

Red Devil RX 6700 XT Cons

  • Pricing. PowerColor has given no suggested price. We simply cannot compare its price with the reference version at $479 during this current dual pandemic situation. Wait for stock and pricing stability after ETH mining crashes – do not buy from scalpers!
  • Weaker ray tracing performance than the RTX 3070 or the RTX 3060 Ti.

Either the reference version or the Red Devil RX 6700 XT is a good card choice for those who game at 2560×1440, and they represent good alternatives to the RTX 3070, albeit with weaker ray tracing and VR performance. They are offered especially for those who prefer AMD cards and FreeSync2-enabled displays, which are generally less expensive than G-SYNC displays, and Infinity Cache & Smart Access Memory are a real plus for gamers using the Ryzen 5000 platform. If a gamer is looking for something extra above the reference version, the Red Devil RX 6700 XT is a very well made and handsome RGB-customizable card that will overclock better.

The Verdict:

  • PowerColor’s Red Devil RX 6700 XT is a solidly-built good-looking RGB card with higher clocks out of the box than the reference version and it overclocks better. It trades blows with the RTX 3070 in many rasterized games. Although we have no price or availability updates, it is a kick-ass RX 6700 XT. Hopefully there will be some solid supply coming and the market pricing will normalize after the cryptocurrency pandemic ends (relatively soon).

The reference and Red Devil RX 6700 XTs offer good alternatives to the RTX 3070 and the RTX 3060 Ti for solid raster performance in gaming, and they also beat the performance of AMD’s last generation by a good margin.

Stay tuned, there is much more coming from BTR. This weekend we will return to VR with a performance evaluation comparing the Red Devil RX 6700 XT with the RTX 3070 and the RTX 3060 Ti. After that, we have a T-FORCE PCIe Gen 4 x4 SSD review. And stay tuned for Rodrigo’s upcoming 461.92 driver performance analysis!

Happy Gaming!

The PowerColor Red Devil RX 6800 XT takes on the Reference RX 6800 XT & the RTX 3080 in 37 Games

The Red Devil RX 6800 XT arrived at BTR for evaluation on a short-term loan from PowerColor on Wednesday, the same day the card launched for sale with very limited supply and with no manufacturer recommended pricing, although it has been listed out of stock for $799.99 at Newegg. We have been benching it versus the $699 RTX 3080 Founders Edition (FE) and the $649 reference RX 6800 XT that we received the same day from AMD, using GPGPU, workstation, SPEC, 37 games, and synthetic benchmarks. We concluded from our preliminary 9-game PC and 15-game PCVR review relative to the RTX 3080 that the reference RX 6800 XT is probably faster at pancake gaming than at VR gaming.

We will also compare the performance of these competing cards with the RX 5700 XT Anniversary Edition (AE) and the GTX 1080 Ti FE to see how older cards fare, and we also include all of the GeForce Turing Super cards and the Ampere cards to complete BTR’s 10-card Big Picture.

Left to Right: Red Devil RX 6800 XT, Reference RX 6800 XT, RTX 3080 FE

The Red Devil RX 6800 XT is factory clocked 90MHz higher than the reference version using the OC BIOS. According to its specifications, the Red Devil RX 6800 XT can boost up to 2340MHz out of the box. It also looks different from older-generation classic Red Devils, arriving in a more neutral gray color instead of all red and black. The Red Devil RX 6800 XT features an RGB mode whose LEDs default to a bright red and which may be customized with PowerColor’s DevilZone software.

The Red Devil RX 6800 XT Features & Specifications

Here are the Red Devil RX 6800 XT specifications according to PowerColor:

Specifications

Source: PowerColor

Features

Here are the Red Devil RX 6800 XT features.

Source: PowerColor

Additional Information from PowerColor

  • The card has 2 modes, OC and Silent, with 281W / 255W power targets and a BIOS switch on the side of the card. Even in performance mode it’s considerably quieter than the reference board, but the Silent mode is truly whisper quiet; in a normal case with optimal airflow, you will most likely see the card run at around 1,000 RPM.
  • The board has a 16-phase VRM versus the 11+2-phase design of the reference board, meaning it is over-spec’d in order to deliver the best stability and overclocking headroom. It is not only capable of well over 400W, but such a VRM will also run cooler and last longer.
  • DrMos and high-polymer Caps are used with no compromises.
  • The cooler features two 100mm fans and a center 90mm fan, all with dual ball bearings, plus 7 heat pipes (3x8Φ and 4x6Φ) across a high-density heatsink with a copper base. The PCB is shorter than the cooler.
  • RGB is enhanced; the Red Devil now connects to the motherboard’s aRGB (5V 3-pin) connector.
  • Red Devil has Mute fan technology, fans stop under 60C.
  • The ports are LED illuminated, so you can see where to plug in your cables in the dark.
  • The card back plate does not have thermal pads but instead there are openings across the backplate for the PCB to breathe.
  • The Red Devil RX 6800 XT Graphics Card Limited Edition includes a unique, high-quality crafted Red Devil keycap to make your keyboard look devilish.
  • Buyers of the Red Devil Limited Edition can join exclusive giveaways and get access to the Devil Club website, a members-only club for Devil users that provides news, competitions, downloads and, most importantly, instant support via live chat.

The Big Navi 2 Radeon 6000 family

The Radeon RX 6800 competes with the RTX 3070 and is priced a little higher at $579, while the RX 6800 XT at $649 competes with the RTX 3080 at $699. Next week, the RX 6900 XT releases at $999 to compete with the $1499 RTX 3090.

Here is a die shot of the GPU powering the Radeon 6000 series courtesy of AMD

AMD has their own ecosystem for gamers and many unique new features for the Radeon 6000 series.

The Test Bed

BTR’s test bed consists of 37 games and 3 synthetic game benchmarks at 1920×1080, 2560×1440, and at 3840×2160 as well as SPEC, Workstation, and GPGPU benchmarks. Our latest games include Watch Dogs: Legion, Call of Duty Black Ops: Cold War and Assassin’s Creed: Valhalla. The testing platform uses a clean installation of Windows 10 64-bit Pro Edition, and our CPU is an i9-10900K which turbos all 10 cores to 5.1/5.0GHz, an EVGA Z490 FTW motherboard, and 32GB of T-FORCE Dark Z DDR4 3600MHz. The games, settings, and hardware are identical except for the cards being compared.

First, let’s take a closer look at the new PowerColor Red Devil RX 6800 XT.

A Closer Look at the PowerColor Red Devil RX 6800 XT

Although the Red Devil RX 6800 XT advertises itself as a premium 7nm card on AMD’s RDNA 2 architecture which features FidelityFX, FreeSync 2 HDR and PCIe 4.0, the cover of the box uses almost no text in favor of stylized imagery.

The back of the box touts key features which now include HDMI 2.1 VRR, ray tracing technology, and VR Ready Premium, and it states the 850W power supply and system requirements, although there is a lot of blank, unused space on the box. AMD’s technology features are highlighted, but the box does not even mention PowerColor’s custom cooling solution, dual BIOSes, RGB software, illuminated outputs, or backplate.

Opening its very well-padded box, we now see advertising that probably should have been on the box’s outside instead. Also inside are a quick installation guide, an RGB LED cable, and an invitation to join PowerColor’s Devil’s Club. In addition, a couple of keycaps are included which could prove useful for benchmarking while wearing an HMD. PowerColor’s is a nicer presentation than AMD’s reference RX 6800 XT packaging.

The Red Devil RX 6800 XT is a large tri-fan card in a three-slot design which is quite handsome in PowerColor’s colors and even more striking with the RGB on. Here is the Red Devil next to an RTX 3080 FE to show how much larger and beefier a card it is.

It uses two 8-pin PCIe connections. Above is the reference RX 6800 XT backplate.

The PowerColor Red Devil RX 6800 XT’s sturdy backplate features a stylized custom devil symbol that lights up in the color of your choice if synced, red being the default. This card is number 41 out of a 1,000-card limited edition set. We do not know what this means. There is also a switch to choose between the default overclock (OC) BIOS and the Silent BIOS. We didn’t bother with the Silent BIOS, but it is good to have in case a flash goes bad.

The Red Devil RX 6800 XT’s connectors include 2 DisplayPorts, 1 HDMI connection, and a USB Type-C connector. An LED illuminates this panel to make connections easier in the dark.

The specifications look good and the card itself looks great with its default RGB bright red contrasting with the black backplate and its aggressively lit-up end perhaps stylistically reminiscent of an automotive grill.

Let’s check out its performance after we look over our test configuration and more on the next page.

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.9 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • Radeon RX 6800 XT Reference version 16GB, stock settings, on loan from AMD
  • Red Devil RX 6800 XT 16GB, stock and overclocked, on short term loan from PowerColor
  • RTX 3080 Founders Edition 10GB, stock, on loan from NVIDIA
  • Radeon RX 5700 XT 8GB Anniversary Edition, stock AE clocks.
  • RTX 3090 Founders Edition 24GB, stock clocks, on loan from NVIDIA
  • RTX 3070 Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • RTX 2080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • RTX 2080 SUPER Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • RTX 2070 SUPER Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • GTX 1080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive
  • 1.92TB SanDisk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • BenQ EW3270U 32″ 4K HDR 60Hz FreeSync monitor
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking and overclocking performed with the case closed

Test Configuration – Software

  • GeForce 456.96 for the RTX 3070, the RTX 2080 Ti, and the RTX 2070/2080 SUPER; and GeForce 456.16 Press drivers and GeForce 456.38 public drivers (functionally identical) are used for the other GeForce cards. GeForce GRD 457.30 is used for games released in late October and November although otherwise there were no general game performance driver improvements since Ampere launched.
  • Adrenalin 2020 Edition 20.11.2 public launch drivers used for the RX 6800 XT reference and Red Devil editions at their factory clocks and the Red Devil was also overclocked. Adrenalin 2020 Edition 20.10.1 drivers used for the RX 5700 XT Anniversary Edition (AE) at AE clocks.
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel.
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold including minimum frame rates shown on the chart next to the averages in a smaller italics font where higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates v2004. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX
  • MSI’s Afterburner, 4.6.3 beta to set all video cards’ power and temperature limits to their maximums

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • Wolfenstein Youngblood
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • Call of Duty Black Ops: Cold War
  • Assassin’s Creed: Valhalla
  • Watch Dogs: Legion
  • Horizon Zero Dawn
  • Death Stranding
  • F1 2020
  • Mech Warrior 5: Mercenaries
  • Call of Duty Modern Warfare
  • Gears 5
  • Anno 1800
  • Tom Clancy’s The Division 2
  • Metro Exodus
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Assetto Corsa Competizione
  • Shadow of the Tomb Raider
  • Project CARS 2
  • Forza 7

DX11

  • Crysis Remastered
  • A Total War Saga: Troy
  • Star Wars: Jedi Fallen Order
  • The Outer Worlds
  • Destiny 2 Shadowkeep
  • Borderlands 3
  • Total War: Three Kingdoms
  • Far Cry New Dawn
  • Assassin’s Creed Odyssey
  • Monster Hunter: World
  • Overwatch
  • Grand Theft Auto V

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.90 benchmark
  • Sandra 2020 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

We used MSI’s Afterburner to set all video cards’ power and temperature limits to maximum.

AMD Adrenalin Control Center Settings

All AMD settings are set so that all optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings. All Navi cards are capable of high Tessellation unlike earlier generations of Radeons.

Anisotropic Filtering is disabled by default but we always use 16X for all game benchmarks.

Let’s check out overclocking, temperatures and noise next.

Overclocking, temperatures and noise

We couldn’t spend a lot of time overclocking the Red Devil RX 6800 XT for this review but we were able to rough in a decent overclock. We used the OC BIOS for this evaluation.

Above are the PowerColor Red Devil RX 6800 XT Wattman default settings, including the power limit left at default. For this card, performance did not change whether the power limit was set to default or higher, unlike the reference edition, which gained performance as the power limit increased, especially for overclocking. In fact, setting a higher power limit at our sample’s maximum overclock made it unstable.

The Red Devil RX 6800 XT’s clocks are specified to boost “up to 2340MHz” but our sample can peak well above that under full load, at default. The Red Devil’s temperatures stay low in the mid-70s C with the fans quietly running even using the OC BIOS.

There is a small performance increase from overclocking the RX 6800 XT core by 10% and setting the maximum frequency to 2600MHz. AMD has evidently locked down overclocking on RX 6800 XT cards in an attempt to maximize overall performance. We would also suggest that the RX 6800 XT is rather voltage constrained, and if you want a higher overclock, pick a factory-overclocked partner version like the Red Devil instead of a reference version. We also set the vRAM to its maximum 7% overclock, and the card remained stable for all testing. Check the overclocking chart in the next section for the performance increases in gaming.
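To put those percentages into rough absolute terms, here is a quick back-of-the-envelope sketch in Python. The 2340MHz figure is the card’s specified boost clock from earlier in this review; the 16Gbps GDDR6 baseline is our own assumption rather than something stated in the article, so treat the memory number as illustrative only.

```python
# Rough arithmetic only: what the stated overclock percentages work out to.
# Assumptions: the factory "up to 2340MHz" boost is the core baseline, and
# 16 Gbps effective GDDR6 (not stated in this review) is the memory baseline.

core_baseline_mhz = 2340        # factory boost clock from the specs above
mem_baseline_gbps = 16          # assumed reference GDDR6 effective speed

core_target_mhz = core_baseline_mhz * 1.10   # +10% on the core
mem_target_gbps = mem_baseline_gbps * 1.07   # +7% on the memory

print(round(core_target_mhz))       # ~2574 MHz, just under the 2600MHz cap we set
print(round(mem_target_gbps, 1))    # ~17.1 Gbps effective
```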

Let’s head to the performance charts to see how the performance of the RX 6800 XT at reference and at Red Devil clocks compares with 8 other cards.

Performance summary charts

Here are the performance results of 37 games and 3 synthetic tests comparing the Red Devil RX 6800 XT 16GB with the RTX 3080 FE 10GB and with the reference RX 6800 XT, plus seven other cards, all at their factory-set clocks. The highest settings are used and are listed on the charts. The benches were run at 1920×1080, 2560×1440, and 3840×2160. Click on each chart to open it in a pop-up for best viewing.

Most gaming results show average framerates in bold text, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. The games benched with OCAT show average framerates but the minimums are expressed by frametimes in ms where lower numbers are better.
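For readers who want to relate the two metrics, a 99th-percentile frametime in milliseconds maps directly to an effective framerate. The tiny sketch below (plain Python with made-up example values, not numbers from our charts) shows the conversion in both directions.

```python
# Converting between a frametime in milliseconds and a framerate (FPS).
# The example values below are hypothetical and not taken from our charts.

def frametime_ms_to_fps(frametime_ms: float) -> float:
    """1000 ms in a second divided by the time one frame takes."""
    return 1000.0 / frametime_ms

def fps_to_frametime_ms(fps: float) -> float:
    return 1000.0 / fps

print(frametime_ms_to_fps(20.0))    # a 20 ms 99th-percentile frametime ~ 50 FPS
print(fps_to_frametime_ms(144.0))   # 144 FPS ~ 6.9 ms per frame
```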

The Red Devil RX 6800 XT vs. the reference RX 6800 XT and vs. the RTX 3080 FE

The first set of charts shows the 3 main competing cards. Column one represents the RX 6800 XT reference version ($649) performance, column two is the RTX 3080 FE ($699), and column three is the Red Devil RX 6800 XT ($799?). We are especially comparing the wins – denoted by yellow text – between the RX 6800 XT and the RTX 3080. If there is a performance tie, both sets of numbers are given in yellow text. In addition, if there is a further performance improvement with the Red Devil card, the results are given in gold text.

The Red Devil RX 6800 XT is perhaps around 1-2% faster than the reference version, and it trades blows with the RTX 3080 Founders Edition. NVIDIA cards tend to be stronger in DX11, and Vulkan performance also appears strong on the RTX 3080, although one has to go game-by-game to see which card is faster in DX12.

Let’s see how the Red Devil RX 6800 XT fits in with our expanded main summary chart, the “Big Picture”, comparing a total of ten cards.

The Big Picture

Here we see the Red Devil RX 6800 XT performance compared with nine other cards on recent drivers. This time the Red Devil RX 6800 XT has all of its performance results in yellow text so it stands out.

UPDATED 12/02/20 03:47 AM PT. The figures were mistakenly transposed/inserted for Assetto Corsa Competizione and CoD: Cold War and have been fixed on the charts. Also, Assetto Corsa Competizione is DX11, not DX12.

Next we look at six ray tracing enabled games, each using maximum ray traced settings where available.

Ray Traced Benchmarks

The Red Devil RX 6800 XT is next compared with six cards when ray tracing is enabled in six games.

The RX 6800 XT now appears to perform similarly to the RTX 2070/2080 SUPER class when ray tracing features are enabled in-game. But AMD has no hardware equivalent to NVIDIA’s dedicated AI Tensor cores, so it cannot take advantage of DLSS-enabled games, which puts its ray tracing performance even further behind. Although AMD has promised a DLSS equivalent in the future, the RX 6800 XT cannot currently compete with the RTX 3080 in ray traced games.

Next we look at overclocked performance.

Overclocked benchmarks

These ten benchmarks were run with the Red Devil RX 6800 XT overclocked +10% on the core and +7% on the memory versus at factory clocks. The RX 6800 XT reference card results are presented first and the factory clocked Red Devil RX 6800 XT is in the second column. The third column represents manually overclocked Red Devil performance results followed by the stock RTX 3080 FE results in the last column.

There is a small performance increase from manually overclocking the Red Devil RX 6800 XT beyond its factory clocks, which already give it a 1-2% performance boost over the reference version. AMD has evidently locked down overclocking on RX 6800 XT cards in an attempt to maximize overall performance. We would also suggest that the reference RX 6800 XT is rather voltage constrained, and if you want a higher overclock, pick a factory-overclocked partner version like the Red Devil instead of a reference version.

Let’s look at non-gaming applications next to see if the RX 6800 XT is a good upgrade from the other video cards we test starting with Blender.

Blender 2.90 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked three Blender 2.90 benchmarks which measure GPU performance by timing how long it takes to render production files. We tested seven of our comparison cards with both CUDA and Optix running on the GPU instead of using the CPU. We did not benchmark the RX 5700 XT using OpenCL.

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.
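Because the chart reports render times rather than rates, comparing cards means putting the minutes-and-seconds values into a common unit first. The small helper below uses hypothetical times (not our measured results) to illustrate the conversion and the resulting relative speed.

```python
# Blender benchmark times are reported as minutes:seconds, and lower is better.
# The two times below are hypothetical examples, not our measured results.

def to_seconds(render_time: str) -> int:
    """Convert an 'M:SS' render time string into total seconds."""
    minutes, seconds = render_time.split(":")
    return int(minutes) * 60 + int(seconds)

card_a = to_seconds("2:45")   # 165 seconds
card_b = to_seconds("3:10")   # 190 seconds

# The lower-time card renders this scene 190/165 ~ 1.15x as fast as the other.
print(round(card_b / card_a, 2))
```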

Blender’s benchmark performance is similar using the RX 6800 XT compared with the RTX 3080. Although the performance results depend on the scene rendered, it appears that the RTX 3080 may be faster.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.25

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison alongside the GPGPU benchmarks.

Here are the Red Devil RX 6800 XT AIDA64 GPGPU results compared with an overclocked i9-10900K.

Here is the chart summary of the AIDA64 GPGPU benchmarks with seven of our competing cards side-by-side.

The RX 6800 XT is a fast GPGPU card and it compares favorably with the Ampere cards being weaker in some areas and stronger in others. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

The author of Sandra 2020 informed us that while NVIDIA has sent some optimizations, they are generic for all cards, not Ampere specific. The tensor paths for FP64 & TF32 have not yet been enabled in Sandra 2020, so GEMM & convolution should get faster once they can run on Ampere’s tensor cores. BF16 is supposed to be faster than FP16/half-float, but since the precision losses are unknown, it has not yet been enabled either. And finally, once the updated CUDA SDK for Ampere gets publicly released, Sandra GPGPU performance should improve as well.

With the above in mind, we ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them.

In Sandra GPGPU benchmarks, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses. However, we see very solid improvements of the RX 6800 XT over the RX 5700 XT.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 summary and raw scores for the RX 6800 XT.

Here are the Red Devil SPECworkstation3 results summarized in a chart along with 8 competing cards. Higher is better.

Using SPEC benchmarks, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) released a new 2020 version of its SPECviewperf benchmark in October that features updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management.

We benchmarked at 4K and here are the summary results for the Red Devil RX 6800 XT.

Here are SPECviewperf 2020 GPU reference and Red Devil RX 6800 XT benchmarks summarized in a chart together with five other cards.

Again we see different architectures with different strengths and weaknesses. The reference version and the Red Devil are quite close in performance.

After seeing these benches, some creative users will probably upgrade their existing systems with a new card based on the performance increases and the associated gains in productivity that they require. The decision to buy a new video card should be based on each user’s workflow, requirements, and budget. Time is money depending on how these apps are used. However, the target demographic for the reference and Red Devil RX 6800 XTs is primarily gamers.

Let’s head to our conclusion.

The Conclusion

The Red Devil RX 6800 XT improves significantly over the RX 5700 XT and it trades blows with the RTX 3080 FE in rasterized games. The Red Devil RX 6800 XT beats the last generation cards including the RTX 2080 Ti although it struggles with ray traced games especially when DLSS is used for the GeForce cards. We also note that the reference RX 6800 XT is slower and less smooth for VR gaming than the RTX 3080, but some of this may be attributed to immature drivers.

For Radeon gamers, the reference RX 6800 XT is a very decent alternative to GeForce Ampere cards for the vast majority of modern PC games that use rasterization. The RX 6800 XT offers 16GB of GDDR6 versus the 10GB of GDDR6X that the RTX 3080 is equipped with.

At its suggested price of $649, or $50 less than the RTX 3080 FE, the reference RX 6800 XT offers a good value – if it can be found at all. Unfortunately, this launch has proved to be an extremely high demand and limited supply event that has been called a paper launch by many wishing to purchase one. And the same thing has happened to Ampere cards where the stock is still trickling in and being purchased the instant it’s available. So prices are high and many resellers are taking advantage of this demand situation by raising prices significantly.

PowerColor hasn’t set any pricing on the Red Devil RX 6800 XT, allowing resellers to set their own. The company claims that its margins are actually below its usual historically low double digits (10-12%) for a new product. However, we have seen Newegg set Red Devil pricing at $799, which puts it into competition with the very fastest RTX 3080s. It’s hard to recommend an $800 card, even though it is overclocked, very nicely equipped, and better built than the well-designed $650 reference version – assuming AMD keeps that pricing and continues to ship reference RX 6800 XTs.

We recommend the Red Devil RX 6800 XT as a great choice out of multiple good choices, especially if you are looking for good looks with RGB, an exceptional cooler, great performance for 2560×1440 or 4K, PowerColor’s excellent support, and overall good value assuming that the stock and price stabilizes.

Let’s sum it up:

The Red Devil RX 6800 XT Pros

  • The PowerColor Red Devil RX 6800 XT is much faster than the last generation RX 5700 series by virtue of the new RDNA 2 architecture. It beats the RTX 2080 Ti and the RTX 3070 as it trades blows with the RTX 3080 FE.
  • 16GB vRAM may make the RX 6800 XT more useful for future gaming than the 10GB vRAM the RTX 3080 is equipped with
  • The Red Devil RX 6800 XT has excellent cooling with less noise than the reference version
  • The Red Devil RX 6800 XT has a very good power delivery and 3-fan custom cooling design that is very quiet when overclocked even using the OC mode.
  • A dual BIOS gives the user a choice of quiet operation with less overclocking, or a bit more noise with an unlocked power limit and higher overclocks.
  • FreeSync2 HDR eliminates tearing and stuttering.
  • Customizable RGB lighting and a neutral color allow the Red Devil to fit into any color scheme using the DevilZone software program.

Red Devil RX 6800 XT Cons

  • Pricing. PowerColor has given no suggested price and Newegg has it for $799.99. Compared with the reference version at $649, it is too expensive and it costs more than many overclocked aftermarket RTX 3080s. Wait for stock and pricing stability.
  • Impossible to buy at a reasonable price.
  • Weaker ray tracing and VR performance than the RTX 3080. Immature drivers may play a part.

Either the reference version or the Red Devil RX 6800 XT is a good card choice for those who game at 2560×1440 or at 4K, and they represent good alternatives to the RTX 3080, albeit with weaker ray tracing and VR performance. They are offered especially for those who prefer AMD cards and FreeSync2-enabled displays, which are generally less expensive than G-SYNC displays. And if a gamer is looking for something extra above the reference version, the Red Devil RX 6800 XT is a very well made and good-looking card that will overclock better.

The Verdict:

  • PowerColor’s Red Devil RX 6800 XT is a solidly-built, handsome card with higher clocks out of the box than the reference version. It trades blows with the RTX 3080. Although we have no price or availability updates, it is a kick-ass RX 6800 XT. Hopefully there will be some solid supply coming and the market pricing will normalize.

The reference and Red Devil RX 6800 XTs offer good alternatives to the RTX 3080 for solid raster performance in gaming, and they also beat the performance of AMD’s last generation.

Stay tuned, there is much more coming from BTR. This week we will continue our Ampere vs Big Navi showdown. Next, we will return to VR with a performance evaluation using the Vive Pro, comparing a brand new unreleased card with the RTX 3070, the RTX 3080, the RX 6800 XT, and the RX 6800.

If you would like to comment, please use the section below.

Happy Gaming!

The RTX 3070 Founders Edition Arrives at $499 – Performance Revealed – 35 Games, SPEC, Workstation, and GPGPU Benchmarked

BTR received the RTX 3070 8GB Founders Edition (FE) from NVIDIA and we have been testing its performance by benchmarking 35 games, and also by overclocking it. NVIDIA claims that at $499 it is as fast as the Turing Flagship, the RTX 2080 Ti, which launched at $999 to $1,199 so we compared their performance. In addition, although the RTX 3070 is a gaming GeForce card, we have added workstation, SPEC, and GPGPU benches.

We have already covered Ampere’s features in depth and we have already reviewed the RTX 3080, the 3070’s $699 faster brother that comes equipped with 10GB of vRAM. This review will consider whether the new RTX 3070 FE at $499 delivers a good value as a compelling upgrade from the Pascal GTX 1080 Ti or even from the Turing RTX 2070 SUPER, the refresh of the RTX 2070 FE which launched at $699 two years ago.

The RTX 3070 is not based on the GA102 chip like the RTX 3080 and the RTX 3090, but rather it uses a separate smaller GA104 GPU chip. Below is the full-chip diagram.

The RTX 3070 FE uses 64 SMs, 5888 CUDA cores, 184 3rd Generation Tensor and 46 RT cores, along with 184 Texture Units and 96 ROPs. The Boost Clock is 1725MHz, the memory clock is 7000MHz, and 8192MB of GDDR6 on a 256-bit memory bus provide 448GB/s of bandwidth, all within a 220W total GPU power envelope.
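As a quick sanity check on the bandwidth figure, peak memory bandwidth follows directly from the bus width and the per-pin data rate. The short sketch below reproduces the 448GB/s number; reading the 7000MHz memory clock as a 14Gbps effective per-pin data rate is our interpretation rather than something stated in the paragraph above.

```python
# Peak memory bandwidth from bus width and per-pin data rate.
# Interpretation: the 7000 MHz memory clock corresponds to 14 Gbps effective
# per pin (double the quoted clock); this is our reading, not a figure
# stated in the text above.

bus_width_bits = 256
data_rate_gbps = 14               # gigabits per second, per pin, effective

bandwidth_gb_per_s = bus_width_bits / 8 * data_rate_gbps
print(bandwidth_gb_per_s)         # 448.0 GB/s, matching the quoted figure
```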

Since we have overclocked the RTX 3070, we will compare its overclocked performance versus stock. We have added Crysis Remastered to our benching suite to see “Can it Run Crysis” at 1080P, 1440P, and at 4K. In addition, we will also post SPECworkstation3 GPU results and a brand new version of the improved SPECviewperf 2020 benchmark which released on October 15.

Besides comparing the RTX 3070 FE’s performance with the RTX 2080 Ti FE, BTR’s test bed includes the fastest Ampere cards – the RTX 3080 FE and the RTX 3090 FE – and the Turing RTX 2080 SUPER and RTX 2070 SUPER FEs. We also test NVIDIA’s flagship Pascal cards, the TITAN Xp plus the GTX 1080 Ti FE. This time, we have added AMD’s fastest Navi card, the RX 5700 XT Anniversary Edition, to compare its performance as well.

We benchmark using Windows 10 64-bit Pro Edition at 1920×1080, 2560×1440, and at 3840×2160 using Intel’s Core i9-10900K at 5.1/5.0 GHz and 32GB of T-FORCE DARK Z 3600MHz DDR4 on a EVGA Z490 FTW motherboard. All games and benchmarks use the latest versions, and we use recent GeForce Game Ready drivers for games.

Let’s first unbox the RTX 3070 Founders Edition before we look at our test configuration

The RTX 3070 Founders Edition Unboxing

The Ampere generation RTX 3070 Founders Edition is a completely redesigned Founders Edition and here is the card, unboxed.

Just like with the RTX 3080 and RTX 3090 Founders Editions, the RTX 3070 comes in a “shoebox” style where the card inside lays flat at a slight incline for display. However, the RTX 3070 box is much smaller and shorter than either of its siblings. The thick padding of the box protects the card.

The system requirements, contents, and warranty information are printed on the bottom of each box. The RTX 3070 requires a 650W power supply unit, and the case must have space for a 242mm (L) x 112mm (W) two-slot card. It easily fits in our Phanteks Eclipse P400 ATX mid-tower.

Inside the box and beneath the card are warnings, a quick start guide and warranty information, plus the 12-pin to PCIe 8-pin dongle that will be required to connect the RTX 3070 to most PSUs.

A completely redesigned shroud creates a new look for the RTX 3070 Founders Edition and provides a premium, solid, heavy feel to its industrial design. It is a moderately heavy 2-slot card with dual fans.

Turning the card over, we see the unique design shared by the Ampere FEs. This card is designed to keep the GPU cool, including by using a short PCB, and the inside of the card is mostly heatsink fins.

There is a very large surface area for cooling, so heat is readily transferred to the fin stack, and the dual fans exhaust it out of the back of the case and also from the top of the card into the case’s airflow.

The IO panel has a very large air vent and four connectors. The connectors are similar to the Founders Edition of the RTX 2080 Ti and the RTX 3080, but the VirtualLink connector for VR is no longer offered since HMD manufacturers are not using it. Three DisplayPort 1.4 connectors are included, and the HDMI port has been upgraded from 2.0 to 2.1 allowing for 4K/120Hz over a single HDMI cable.
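To see why the HDMI 2.1 upgrade matters for 4K/120Hz, a rough bandwidth estimate is enough. The sketch below counts only raw pixel data and ignores blanking and encoding overhead, and the HDMI 2.0 payload limit it compares against (roughly 14.4Gbps usable out of an 18Gbps link) is a commonly published figure rather than something from this review.

```python
# Rough, uncompressed pixel-data estimate for 4K at 120 Hz with 8-bit RGB.
# Blanking intervals and link encoding overhead are ignored, so the real
# requirement is even higher; the HDMI 2.0 payload limit (~14.4 Gbps of an
# 18 Gbps link) is a commonly published figure, used here only for scale.

width, height = 3840, 2160
refresh_hz = 120
bits_per_pixel = 24              # 8 bits per channel, RGB

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(round(raw_gbps, 1))        # ~23.9 Gbps of raw pixel data

hdmi_2_0_payload_gbps = 14.4
print(raw_gbps > hdmi_2_0_payload_gbps)   # True: 4K/120Hz needs HDMI 2.1
```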

In our opinion, the RTX 3070 Founders Edition is a good-looking card with a unique industrial style and it looks good in any case. The logo does not light up unfortunately.

Before we look at overclocking, power and noise, let’s check out our test configuration.

Test Configuration

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.3 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • RTX 3070 Founders Edition 8GB, stock and overclocked, on loan from NVIDIA
  • RTX 3090 Founders Edition 24GB, stock clocks, on loan from NVIDIA
  • RTX 3080 Founders Edition 10GB, stock clocks, on loan from NVIDIA
  • RTX 2080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • RTX 2080 SUPER Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • RTX 2070 SUPER Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • TITAN Xp Star Wars Collectors Edition 12GB, stock clocks, on loan from NVIDIA
  • GTX 1080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • Radeon RX 5700 XT 8GB Anniversary Edition, stock AE clocks.
  • 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive
  • 1.92TB SanDisk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • BenQ EW3270U 32″ 4K HDR 60Hz FreeSync monitor
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking and overclocking performed with the case closed

Test Configuration – Software

  • GeForce 456.96 for the RTX 3070, the RTX 2080 Ti, and the RTX 2070 SUPER; and GeForce 456.16 Press drivers and GeForce 456.38 public drivers (functionally identical) are used for the other GeForce cards.
  • Adrenalin 2020 Edition 20.10.1 drivers used for the RX 5700 XT Anniversary Edition (AE) at AE clocks.
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel.
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold including minimum frame rates shown on the chart next to the averages in a smaller italics font where higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates v2004. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and multiple games use the Vulkan API.
  • Latest DirectX
  • MSI’s Afterburner, 4.6.3 beta to set the RTX 3070’s power and temperature limits to their maximums

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • Wolfenstein Youngblood
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • Horizon Zero Dawn
  • Death Stranding
  • F1 2020
  • Mech Warrior 5: Mercenaries
  • Call of Duty Modern Warfare
  • Gears 5
  • Control
  • Anno 1800
  • Tom Clancy’s The Division 2
  • Metro Exodus
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider
  • Project CARS 2
  • Forza 7

DX11

  • Crysis Remastered
  • A Total War Saga: Troy
  • Star Wars: Jedi Fallen Order
  • The Outer Worlds
  • Destiny 2 Shadowkeep
  • Borderlands 3
  • Total War: Three Kingdoms
  • Far Cry New Dawn
  • Assassin’s Creed Odyssey
  • Monster Hunter: World
  • Overwatch
  • Grand Theft Auto V

Additional Games

  • RTX Quake II
  • Bright Memory Infinite RTX Demo

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.90 benchmark
  • Sandra 2020 GPGPU Benchmarks
  • SPECworkstation3
  • SPECviewperf 2020
  • Octane benchmark

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

We used MSI’s Afterburner to set all video cards’ power and temperature limits to maximum as well as for overclocking and to increase the RTX 3070’s voltage to its maximum for additional overclocking.

AMD Adrenalin Control Center Settings

All AMD settings are set so that all optimizations are off, Vsync is forced off, Texture filtering is set to High, and Tessellation uses application settings. Navi cards are quite capable of high Tessellation unlike earlier generations of Radeons.

Anisotropic Filtering is disabled by default but we always use 16X for all game benchmarks.

By setting the Power Limits and Temperature limits to maximum for each card, they do not throttle, but they can each reach and maintain their individual maximum clocks. This is particularly beneficial for high power cards.

Let’s check out overclocking, temperatures and noise next.

Overclocking, Temperatures & Noise

All of our performance and overclocked testing are performed in a closed Phanteks Eclipse P400 ATX mid-tower case. Inside, the RTX 3070 is a very quiet card even when overclocked and we never needed to increase its fan speeds manually or change the stock fan profile. Compared with the RTX 2080 Ti which becomes rather loud when it ramps up, the RTX 3070 is much quieter and can barely be heard over the other fans in our PC. We overclocked the RTX 3070 using Afterburner including adding .1mV more voltage.

We used Heaven 4.0 running in a window at completely maxed-out settings at a windowed 2560×1440 to load the GPU to 98% so we could observe the running characteristics of the RTX 3070 and also to be able to instantly compare our changed clock settings with their results. At completely stock settings with the GPU under full load, the RTX 3070 ran cool and stayed below 75C with clocks that averaged around 1900MHz.

Simply raising the Power and Temperature limits to their maximums resulted in the clocks running at 1920MHz to 1935MHz with no changes in temperatures whatsoever using the stock fan profile. In fact, we never needed to adjust the fan profile since the GPU never rose above 75C. It never became noisy as the fan never rose above 57%. Adding .1mV to the core didn’t make any difference to stability or to performance.

Our RTX 3070’s vRAM can overclock to 8250MHz using a +1250MHz offset with a decent performance increase, but a core overclock delivers even more significant performance. The maximum stable Heaven overclock allowed us to add +1200MHz offset to the memory and +115MHz to the core, but it was not stable in games.
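Since peak memory bandwidth scales linearly with the memory clock, the +1250MHz offset mentioned above implies a proportional bandwidth increase. The sketch below is back-of-the-envelope only: the 7000MHz baseline is implied by the 8250MHz-via-+1250MHz figure above, and the 448GB/s stock bandwidth comes from the spec paragraph earlier in this review.

```python
# Back-of-the-envelope: how the memory overclock scales peak bandwidth.
# 7000 MHz baseline is implied by "8250MHz using a +1250MHz offset" above;
# 448 GB/s stock bandwidth is taken from the spec paragraph earlier.

stock_mem_mhz = 7000
oc_mem_mhz = 8250
stock_bandwidth_gb_s = 448

oc_bandwidth_gb_s = stock_bandwidth_gb_s * oc_mem_mhz / stock_mem_mhz
print(round(oc_bandwidth_gb_s))   # ~528 GB/s at the full +1250 MHz offset
```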

Our final RTX 3070 overclock turned out to be an “either – or” scenario regarding the memory OC vs the core OC. After testing multiple combinations, our RTX 3070’s final stable overclock to achieve the highest overall performance added +100MHz offset to the core and +100 MHz to the memory as pictured above which settled in above 2000MHz. The RTX 3070 FE is power-limited, and to achieve a higher overclock will take more voltage than what + .1mV can deliver. If you want a higher overclock, pick a partner overclocked AIB.

To see the performance increase from overclocking, we tested only 7 games at all three of our regular resolutions. We live in Southern California and our power company cut the power off yesterday as we were benchmarking – so we had to cut our overclocking results short. The results are given after the main performance charts in the next section.

First, let’s check out performance on the next page.

Performance Summary Charts & Graphs

Gaming Performance Summary Charts

Here are the summary charts of 34 games and 3 synthetic tests. The highest settings were always chosen and the settings are listed on the chart. The benches were run at 1920×1080, 2560×1440 and at 3840×2160. Eight cards were compared and they are listed in order from the least powerful on the left to the most powerful on the right: the RX 5700 XT, the GTX 1080 Ti, the RTX 2070 SUPER, the RTX 2080 SUPER, the RTX 3070, the RTX 2080 Ti, the RTX 3080, and the RTX 3090.

Most results, except for synthetic scores, show average framerates, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. Games benched with OCAT show average framerates, but the minimums are expressed by frametimes (99th-percentile) in ms where lower are better. An “X” means the benchmark was not run (or could not be run).

All of the games that we tested ran well except for A Total War Saga: Troy, and we suspect that it still may be a game or driver issue that has not yet been addressed by NVIDIA’s driver team. Control also had issues with setting the render resolution for 2560×1440 for some cards, and the RX 5700 XT refused to run in DX12 at all. The Shadow of the Tomb Raider benchmark refused to run on the GTX 1080 Ti and the RX 5700 XT, and it would crash to desktop when we attempted to access the benchmark. We note that cards with less than 11GB of vRAM cannot run Ghost Recon: Breakpoint at 4K/Ultimate above a slideshow.

Although there is some variability depending on which games were tested, the RTX 3070 FE generally trades blows with the slightly faster RTX 2080 Ti FE. NVIDIA claims that the RTX 3070 and the RTX 2080 Ti have similar performance, but no doubt they are referring to the entry-level non-overclocked Ti cards – the RTX 2080 Ti Founders Edition (FE) is an overclocked card.

Next we look at ten RTX/DLSS enabled games, each using maximum ray traced settings and the highest quality DLSS where available.

RTX/DLSS Benchmarks

The RTX 3070 is next compared with the same cards when ray tracing or RTX/DLSS is enabled. The RX 5700 XT can only run one ray traced non-RTX game – Crysis Remastered – so we did not include it. However, we included the GTX 1080 Ti results even though it can only run RTX features above a crawl at 1080P, and only in a very few selected games. We also added RTX Quake II, a fully path-traced game, and the ‘Bright Memory Infinite’ RTX demo benchmark.

The RTX 3070 now appears to be slightly faster than the RTX 2080 Ti when DLSS or RTX features are enabled. NVIDIA is expecting quite a few RTX-enabled games this year and next year, and Watch Dogs: Legion launches in two days with RTX features.

Next we look at overclocked performance.

Overclocked benchmarks

These benchmarks are run with the RTX 3070 overclocked +100MHz on the core and +100MHz on the memory versus at stock clocks. The RTX 3070 stock results are presented first and the overclocked RTX 3070 is in the second column.

There is a small performance increase from overclocking the RTX 3070. NVIDIA has evidently locked down overclocking on Ampere cards in an attempt to maximize performance for all Founders Edition gamers. We would also suggest that the RTX 3070 is rather voltage constrained, and if you want a higher overclock, pick a factory-overclocked partner version instead of a Founders Edition.

Let’s look at non-gaming applications next to see if the RTX 3070 is a good upgrade from the other video cards we test starting with Blender.

Blender 2.90 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We benchmarked three Blender 2.90 benchmarks which measure GPU performance by timing how long it takes to render production files. We tested seven of our comparison cards with both CUDA and Optix running on the GPU instead of using the CPU. We did not benchmark the RX 5700 XT using OpenCL.

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.

Blender’s benchmark performance is similar using the RTX 3070 compared with the RTX 2080 Ti.

Next we look at the OctaneBench.

Octane Bench

OctaneBench allows you to benchmark your GPU using OctaneRender. The hardware and software requirements to run OctaneBench are the same as for OctaneRender Standalone.

We run OctaneBench 2020.1.5 for Windows and here are the RTX 3070’s complete results and overall score of 411.73.

We compare it with the RTX 2080 Ti’s score and results of 384.24.
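The two overall scores make the relative standing easy to quantify; the quick calculation below is nothing more than the ratio of the two published scores.

```python
# Relative OctaneBench standing from the two overall scores quoted above.
rtx_3070_score = 411.73
rtx_2080_ti_score = 384.24

advantage_pct = (rtx_3070_score / rtx_2080_ti_score - 1) * 100
print(f"RTX 3070 is ~{advantage_pct:.1f}% faster in OctaneBench")   # ~7.2%
```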

Here is the summary chart comparing the TITAN Xp, the RTX 2070 SUPER, the RTX 2080 Ti, the RTX 3070, the RTX 3080, and the RTX 3090.

The RTX 3070 is a very decent card when used for rendering.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.25

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison with both the RTX 3070 and RTX 2080 Ti GPGPU benchmarks.

Here are the RTX 3070 AIDA64 GPGPU results compared with an overclocked i9-10900K.

Here is the chart summary of the AIDA64 GPGPU benchmarks with seven of our competing cards side-by-side.

The RTX 3070 is a fast GPGPU card and it compares favorably with the RTX 2080 Ti. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

The author of Sandra 2020 informed us that while NVIDIA has sent some optimizations, they are generic for all cards, not Ampere specific. The tensor paths for FP64 & TF32 have not yet been enabled in Sandra 2020, so GEMM & convolution should get faster once they can run on Ampere’s tensor cores. BF16 is supposed to be faster than FP16/half-float, but since the precision losses are unknown, it has not yet been enabled either. And finally, once the updated CUDA SDK for Ampere gets publicly released, Sandra GPGPU performance should improve as well.

With the above in mind, we ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them.

In Sandra GPGPU benchmarks, the RTX 3070 is similar in performance to the RTX 2080 Ti. However, since the architectures are different, each card exhibits different characteristics with different strengths and weaknesses.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except for vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above.

Here are our SPECworkstation 3.0.4 summary and raw scores for the RTX 3070.

Here are the SPECworkstation3 results summarized in a chart along with six competing cards. Higher is better.

Using SPEC benchmarks, the RTX 3070 is similar in performance to the RTX 2080 Ti. However, since the architectures are different, the cards each exhibit different characteristics with different strengths and weaknesses.

SPECviewperf 2020 GPU Benches

The SPEC Graphics Performance Characterization Group (SPECgpc) released a new 2020 version of its SPECviewperf benchmark twelve days ago that features updated viewsets, new models, support for both 2K and 4K display resolutions, and improved set-up and results management.

We benchmarked at 4K and here are the summary and the raw results for the RTX 3070.

Here are SPECviewperf 2020 GPU benchmarks summarized in a chart together with the three other cards that we had time to test, the RTX 2080 Ti, the RTX 2070 SUPER and the RX 5700 XT.

The RTX 2080 Ti appears to be a little faster than the RTX 3070 in SPECviewperf 2020 GPU benches, but both cards are generally faster than the RX 5700 XT or the RTX 2070 SUPER.

After seeing these benches, some creative users will probably upgrade their existing systems with a new RTX 30X0 series card based on the performance increases and the associated gains in productivity that they require. The decision to buy an RTX 3070 should be based on the workflow and requirements of each user as well as their budget. Time is money depending on how these apps are used. However, the target demographic for the RTX 3070 is primarily gamers, especially at 1440P.

Let’s head to our conclusion.

Final Thoughts

This has been a very enjoyable experience evaluating the new Ampere RTX 3070 versus the seven other cards we tested. The $499 RTX 3070 FE performed very well compared to the RTX 2080 Ti FE – formerly the fastest gaming card in the world, which released at $1199. The RTX 3070 at $499 is a solid upgrade from the GTX 1080 Ti that originally launched at $699, even though we were originally hesitant to recommend the upgrade to an RTX 2080 Ti two years ago based on its price-to-performance.

If a gaming enthusiast wants a very fast card that matches the RTX 2080 Ti FE released just two years ago, then it is an excellent card for 1440P gaming – and even for 4K with some settings lowered. Unfortunately, we were only able to spend 85 hours with the RTX 3070 and we had to focus on performance, so we simply did not have time to touch on its other features, including Reflex and Broadcast.

We plan to follow up with a Reflex Analyzer kit review which features the ASUS ROG Swift 360Hz G-SYNC Gaming Monitor PG259QNR and the ASUS ROG Chakram Core Gaming Mouse. We will compare our Samsung G7 Odyssey 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor with the new 360Hz/1ms ROG Swift 24″ display, and we now have two Chakram gaming mice (the wired mouse is part of the Reflex Analyzer kit) which have become our go-to gaming mice because of their incredible response and feel. No more Logitech for us!

We are very impressed with the Founders Edition of the RTX 3070 after spending nearly 90 hours testing it over the past 6 days. It offers exceptional performance at 1440P and it even supports playable gaming at 4K. For $499, the Founders Edition of the RTX 3070 is well-built, solid, and good-looking, and it stays cool and quiet even when overclocked. The RTX 3070 Founders Edition offers a solid value for GTX 1080 Ti or even for RTX 2070 SUPER owners. Gamers using lower performance cards will love the new $499 Ampere card.

Pros

  • The RTX 3070 at $499 is the near-equal of the RTX 2080 Ti which launched at $999 to $1,199 and it is a jump in performance over most older cards such as the RTX 2070 SUPER or the GTX 1080 Ti.
  • The RTX 3070 is perfect for 1440P gaming at maxed out settings and even for 4K gaming with high settings; and it’s also very useful for intensive creative, SPEC, or GPGPU apps
  • Ray tracing is a game changer in every way and the RTX 3070 is generally faster than the RTX 2080 Ti when DLSS 2.0 or RTX features are enabled
  • Reflex and Broadcast are important features for competitive gamers and broadcasters
  • Ampere improves over Turing with AI/deep learning and ray tracing to improve visuals while also increasing performance with DLSS 2.0 and Ultra Performance DLSS
  • The RTX 3070 Founders Edition’s cooling design is quiet and efficient; the GPU in a well-ventilated case stays cool even when overclocked and it remains quiet using the stock fan profile
  • The industrial design is eye-catching and it is solidly built
  • Price to performance value is solid especially compared with the RTX 2080 Ti

Cons

  • The RTX 3070 is voltage constrained for overclocking, and if you need a high overclock, choose an overclocked partner card.

The Verdict:

If you are a gamer who plays at maxed-out 1440P, you may do yourself a favor by upgrading to a RTX 3070. The RTX 3070 Founders Edition offers good performance value as an upgrade from a GTX 1080 Ti with the additional benefit of being able to handle ray tracing, and it can even meet the demands of 4K gaming with high settings.

The RTX 3070 Founders Edition is available starting today for $499 from NVIDIA’s online store, and USA customers can purchase these cards also directly from Best Buy and Microcenter stores, both online and in person. Canadian customers may purchase RTX 3070 Founders Edition cards from BestBuy.com.

Stay tuned, there is a lot more on the way from BTR. Next up, we will test the RTX 3070 in VR versus the RTX 2080 Ti and the GTX 1080 Ti using the Vive Pro with an ETA of early next week. Stay tuned to BTR!

Happy Gaming!

]]>
The RTX 3090 vs. 2 x RTX 2080 Ti – SLI, mGPU & Pro Apps Benchmarked https://babeltechreviews.com/the-rtx-3090-vs-2-x-rtx-2080-ti-sli-mgpu-pro-apps-benchmarked/ Mon, 12 Oct 2020 00:13:16 +0000 /?p=19351 Read more]]> The RTX 3090 FE vs. RTX 2080 Ti x2 mGPU using SLI, Pro Apps & Workstation and GPGPU benchmarks

This review follows up on the RTX 3090 Founders Edition (FE) launch review. It is the fastest video card in the world, and it is a GeForce optimized for gaming – it is not a TITAN nor a Quadro replacement. However, we demonstrated that it is very fast for SPEC and GPGPU benches, and its huge 24GB vRAM framebuffer allows it to excel in many popular creative apps making it especially fast at rendering.

The RTX 3090 is NVIDIA’s flagship card that commands a premium price of $1500, and some gamers and pro app users may consider buying a second RTX 2080 Ti as an alternative for SLI/mGPU (Multi-GPU) gaming, pro apps, SPEC, GPGPU, and for creative apps. So we purchased an EVGA RTX 2080 Ti XC from eBay and an RTX TITAN NVLink High Bandwidth bridge from Amazon to test 2 x RTX 2080 Tis versus the RTX 3090.

The RTX 3090 is the fastest video card for gaming and it is the first card to be able to run some games at 8K. But the RTX 2080 Ti is still very capable as NVIDIA’s former flagship card, and we will demonstrate how two of them perform in SLI/mGPU games, SPECworkstation3, creative apps using the Blender 2.90 and OTOY benchmarks, and in Sandra 2020 and AIDA64 GPGPU benchmarks. In addition, we will also focus on pro applications like Blender rendering, Blackmagic’s DaVinci Resolve, and OTOY OctaneRender. It will be interesting to see if two RTX 2080 Tis pool their memory to 22GB using these pro apps versus the RTX 3090’s 24GB.

We benchmark SLI games using Windows 10 64-bit Pro Edition at 2560×1440 and at 3840×2160 using Intel’s Core i9-10900K at 5.1/5.0 GHz and 32GB of T-FORCE DARK Z 3600MHz DDR4. All benchmarks use their latest versions, and we use the same GeForce Game Ready drivers for games and the latest Studio driver for testing pro apps.

Let’s check out our test configuration.

Test Configuration

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.3 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • RTX 3090 Founders Edition 24GB, stock clocks on loan from NVIDIA
  • RTX 2080 Ti Founders Edition 11GB, clocks set to match the EVGA card, on loan from NVIDIA
  • EVGA RTX 2080 Ti Black 11GB, factory clocks
  • 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive
  • 1.92TB SanDisk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • BenQ EW3270U 32″ 4K HDR 60Hz FreeSync monitor
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking performed with the case closed

Test Configuration – Software

  • GeForce 456.38 – the last driver to offer a new SLI profile. Game Ready (GRD) drivers are used for gaming and the Studio drivers are used for pro/creative, SPEC, workstation, and GPGPU apps.
  • High Quality, prefer maximum performance, single display, fixed refresh, set in the NVIDIA control panel.
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold, with minimum frame rates shown on the chart next to the averages in a smaller italics font; higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates v2004.
  • Latest DirectX
  • MSI’s Afterburner, 4.6.3 beta
  • OCAT 1.6

SLI/mGPU Games

  • Strange Brigade
  • Shadow of the Tomb Raider
  • Ashes of the Singularity, Escalation
  • Project CARS 2
  • Star Wars: Jedi Fallen Order
  • The Outer Worlds
  • Destiny 2 Shadowkeep
  • Far Cry New Dawn
  • RTX Quake II

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.90 benchmark
  • Sandra 2020 GPGPU Benchmarks
  • SPECworkstation3
  • Octane benchmark

Professional Applications

  • Black Magic Design DaVinci Resolve, supplied by NVIDIA
  • Blender 2.90
  • OTOY Octane Render 2020 1.5 Demo – 8K Redcode RAW projects

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

We used MSI’s Afterburner to set all video cards’ power and temperature limits to maximum as well as to set the Founders Edition clocks to match the XC’s clocks. By setting the Power Limits and Temperature limits to maximum for each card, they do not throttle, but they can each reach and maintain their individual maximum clocks better. When SLI is used, it is set to NVIDIA optimized.

So let’s check out performance on the next page.

SLI & mGPU

BTR has always been interested in SLI, and the last review we posted was in January, 2018 – GTX 1070 Ti SLI with 50 games. We concluded:

“SLI scaling is good performance-wise in mostly older games and where the devs specifically support SLI in newer DX11 and in DX12 games. When GTX 1070 Ti SLI scales well, it easily surpasses a single GTX 1080 Ti or TITAN Xp in performance. . . . SLI scaling in the newest games – and especially with DX12 – is going to depend on the developers’ support for each game [but] recent drivers may break SLI scaling that once worked, and even a new game patch may affect SLI game performance adversely.”

Fast forward 2-1/2 years to today. There are still very few mGPU games, and NVIDIA has relegated SLI to legacy. They will not be adding any new SLI profiles, and the only Ampere card that supports it is the RTX 3090 using a new NVLink bridge – for benchmarking – to set world records in synthetic tests like 3DMark. NVIDIA has this to say about SLI support “transitioning”, quoted in part:

“NVIDIA will no longer be adding new SLI driver profiles on RTX 20 Series and earlier GPUs starting on January 1st, 2021. Instead, we will focus efforts on supporting developers to implement SLI natively inside the games. We believe this will provide the best performance for SLI users.

Existing SLI driver profiles will continue to be tested and maintained for SLI-ready RTX 20 Series and earlier GPUs. For GeForce RTX 3090 and future SLI-capable GPUs, SLI will only be supported when implemented natively within the game.

[Natively supported] DirectX 12 titles include Shadow of the Tomb Raider, Civilization VI, Sniper Elite 4, Gears of War 4, Ashes of the Singularity: Escalation, Strange Brigade, Rise of the Tomb Raider, Zombie Army 4: Dead War, Hitman, Deus Ex: Mankind Divided, Battlefield 1, and Halo Wars 2.

[Natively supported] Vulkan titles include Red Dead Redemption 2, Quake 2 RTX, Ashes of the Singularity: Escalation, Strange Brigade, and Zombie Army 4: Dead War.

… Many creative and other non-gaming applications support multi-GPU performance scaling without the use of SLI driver profiles. These apps will continue to work across all currently supported GPUs as it does today.”

It looks bleak for SLI’s future and dev-supported mGPU titles are beyond rare. So we tested our 40-game benching suite and identified just nine games that scaled well with RTX 2080 Ti. Of the baker’s dozen games that NVIDIA lists and that we have, Civilization VI using the ‘Gathering Storm’ expansion benchmark did not scale, and Red Dead Redemption 2 crashed when we tried to use it. The other games on their list are old and run great on any modern GPU negating any reason to use SLI anyway, except perhaps for extreme supersampling.

We didn’t bother listing the performance of games that barely scale, scale negatively, or exhibit issues when SLI is enabled. That list is very long. Of course, there are still SLI enthusiasts who tweak their games with NVIDIA Inspector and roll back to old drivers to indulge their hobby – but we use the latest drivers without tediously trying workarounds that may or may not be successful.

Build-Gaming-Computers.com was able to identify a total of 57 games (out of many thousands) that scaled well or partially with SLI in 2020, and they have concluded that overall it isn’t worth the trouble or the expense of maintaining two finicky cards using an extra-large PSU. However, here are nine relatively modern games that we tested that show SLI scaling without a lot of microstutter or other issues associated with them.

SLI Gaming Summary Charts

Here are the summary charts of 9 games and 3 synthetic tests that scale with mGPU or SLI. The highest settings are always chosen and the settings are listed on the chart. The benches were run at 2560×1440 and at 3840×2160. The first column represents the performance of a single RTX 2080 Ti, the second represents two RTX 2080 Tis, and the third column gives RTX 3090 results. ‘X’ means the game was not tested.

Most results show average framerates, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. Destiny 2, benched with OCAT, shows average framerates, but the minimums are expressed by frametimes (99th-percentile) in ms where lower is better.
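
To make the distinction concrete, here is a minimal sketch, assuming a per-frame capture exported from OCAT or FrameView as a column of frametimes in milliseconds, of how the average framerate and the 99th-percentile frametime we chart are derived. The frametime values in the listing are illustrative only, not data from our runs.

```python
import statistics

# Illustrative frametimes in milliseconds (not real capture data).
frametimes_ms = [6.9, 7.1, 7.4, 6.8, 12.3, 7.0, 7.2, 6.7, 9.8, 7.3]

# Average framerate: 1000 ms divided by the mean frametime.
avg_fps = 1000.0 / statistics.mean(frametimes_ms)

# Crude 99th-percentile frametime: the value below which 99% of frames fall.
worst_1pct_ms = sorted(frametimes_ms)[max(0, int(len(frametimes_ms) * 0.99) - 1)]

print(f"Average: {avg_fps:.1f} FPS")
print(f"99th-percentile frametime: {worst_1pct_ms:.1f} ms (lower is better)")
```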

Although SLI scaling is good with these nine games at 4K, there are some issues with 2560×1440 and framerate caps. We would prefer to play these nine games with a RTX 3090 that has no issues with microstutter. However, synthetic benches look pretty good.

Even though the RTX 2080 Ti has been surpassed by both the RTX 3080 and the RTX 3090, our PC scored in the top 1% of all PCs using Fire Strike Ultra.

If you are a professional overclocker and/or want to set a world record, we would suggest buying two RTX 3090s for that purpose instead of using any two Turing video cards.

We cannot recommend SLI to any gamer unless they have a very large library of old(er) games that they revisit and play regularly and who don’t mind the issues associated with tweaking and maintaining SLI profiles using old drivers. Then there is the added inconvenience of disabling SLI each time most modern games are played. Besides, there are the additional issues of heat and noise coupled with using two powerful cards with a large PSU, not to mention the expense of buying a second card, and the higher cooling power bills associated with using SLI during the warm months of Summer.

So let’s look at Creative applications next to see if 2 x RTX 2080 Tis are a viable option versus the RTX 3090 starting with the Blender benchmark.

Blender 2.90 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation. We will look at Blender rendering later in this review, but here are the official benchmark results.

For the following results, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds. First up, two RTX 2080 Ti’s using the RTX TITAN NVLink bridge with CUDA.

Next we try Optix using the two Tis.

There is no difference with SLI enabled or disabled. Here is the chart comparing the performance of a single RTX 2080 Ti, two linked RTX 2080 Tis, and the RTX 3090.

Performance is worse using the second RTX 2080 Ti as the benchmark is not optimized for a second video card. However, we will try to render a large scene in Blender as we show later.

Next we look at the OctaneBench.

OTOY Octane Bench

OctaneBench allows you to benchmark your GPU using OctaneRender. The hardware and software requirements to run OctaneBench are the same as for OctaneRender Standalone and we shall also use OctaneRender for a specific rendering test later, under “Professional Apps”.

First we run OctaneBench 2020.1 for Windows, and here are the two NVLinked RTX 2080 Tis’ complete results and overall score of 687.14.

We run OctaneBench 2020.1 again and here are the RTX 3090’s complete results and overall score of 652.30.

We have a win for 2 linked RTX 2080 Ti’s scaling. Here is the summary chart.
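
For readers who like to quantify “a win”, here is a quick sketch using only the two overall scores quoted above; the same ratio approach works for estimating mGPU scaling whenever a single-card baseline score is also available.

```python
# OctaneBench overall scores from the two runs above (higher is better).
dual_2080ti_score = 687.14   # two NVLinked RTX 2080 Tis
rtx_3090_score = 652.30      # single RTX 3090

advantage_pct = (dual_2080ti_score / rtx_3090_score - 1) * 100
print(f"The NVLinked RTX 2080 Tis lead the RTX 3090 by about {advantage_pct:.1f}%")

# With a single-card score, mGPU scaling would simply be:
#   scaling = dual_score / single_score   (2.0 = perfect scaling)
```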

Next, we move on to AIDA64 GPGPU synthetic benchmarks that are built to scale with mGPU.

AIDA64 v6.25

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are generally optimized for popular AMD, Intel, NVIDIA and VIA processors by utilizing their appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison with the video cards’ GPGPU benchmarks.

First the results with a pair of RTX 2080 Tis.

Now the RTX 3090:

Here is the chart summary of the AIDA64 GPGPU benchmarks with the RTX 2080 Ti, the RTX 3090, and NVLinked RTX 2080 Tis side-by-side.

Again the pair of linked RTX 2080 Tis is faster than the RTX 3090 in almost all of AIDA64’s GPGPU benchmarks. So let’s look at Sandra 2020, which is also optimized for mGPU.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better comprehensive tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking.

There are several versions of Sandra including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra’s extensive GPGPU benchmarks and charted the results summarizing them below. The performance results of the RTX 2080 Ti are compared with the performance results of the RTX 3090, and versus the two linked RTX 2080 Tis.

In Sandra synthetic GPGPU benchmarks which are optimized for mGPU, the linked RTX 2080 Tis are faster than the RTX 3090 and they generally scale well over a single Ti. Next we move on to SPECworkstation 3 GPU benchmarks.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation 3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except to vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation 3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance. We did not use SPECviewperf 13 since SPECviewperf 2020 is coming out in mid-October.

Here are the SPECworkstation3 results for two linked RTX 2080 Tis. Higher is better since we are comparing scores.

Here are the SPECworkstation3 GPU benches summarized.

The RTX 3090 was unable to complete two benches, probably because of a conflict with Ampere’s new drivers. But there is either no scaling whatsoever or negative scaling for the NVLinked RTX 2080 Tis. So we questioned the people who are responsible for maintaining the SPECworkstation benchmarks:

Q: I am comparing its SPECworkstation results with 2 x RTX 2080 Ti that are connected using a RTX TITAN NVLink HB Bridge. Is using two GPUs in this manner supported by the benchmark?

A: The short answer is “no”, it will not produce the desired scaling effect if you bridge the two cards. The longer answer has more to do with your expectations and that the benchmark does not explicitly do anything to preclude multi-GPU scenarios from improving support, but it does not have any code that explicitly enables it.

The graphics portions of SPECworkstation come from SPECviewperf which, in turn, is based on recordings of real-world applications. The creation of a rendering context to draw 3D scenes is done in a way that tries to very closely mimic the real-world application and thus, if the real-world application would benefit from multiple GPUs, so might the viewsets that comprise the benchmark.

The GPU compute portions of SPECworkstation run on only a single GPU. We are working toward multi-GPU support in the next major version but it’s not in there now.

So mGPU scaling may depend on whether a benchmark is optimized for it. However, let’s next look at some professional applications where a large memory buffer makes a big performance improvement over having a smaller one.

Creative Applications with Large Memory Workloads

Rendering large models, detailed scenes, and high-resolution textures requires powerful GPUs with a lot of vRAM. Render artists working at the highest quality require high-capacity GPU memory, which allows them to create more detailed final frame renders without needing to reduce the quality of their final output or split scenes into multiple renders, which takes a lot of extra time. Until now, no GeForce has been equipped with 24GB of vRAM while the RTX 2080 Ti offers 11GB. Let’s look at three pro apps that can use much more than 11GB and also test render times. First up is OTOY OctaneRender.

OctaneRender

OctaneRender is the world’s first spectrally correct GPU render engine with built-in RTX ray tracing GPU hardware acceleration. The RTX 3090 allows large scenes to fit completely into its 24GB of GPU memory so out-of-core rendering is not necessary, providing faster rendering times than GPUs with less memory capacity that must fall back to out-of-core data. We tested the RTX 3090/24GB against the bridged RTX 2080 Tis.

Following NVIDIA’s very specific instructions, we rendered a very large detailed image. Looking closely, we see that out-of-core data was not needed since the entire render fit into the 24GB vRAM buffer, and the large image provided only took 45 seconds to render.

We tested the NVLinked RTX 2080 Tis, and the render took much longer at 2 minutes and 27 seconds because it requires much slower out-of-core memory. The 11GB vRAM of the RTX 2080 Tis is evidently not pooled for this render.
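
Putting the two render times above side by side makes the out-of-core penalty obvious; here is a tiny calculation using only the figures already quoted.

```python
# Render times quoted above for the same OctaneRender scene.
rtx_3090_seconds = 45            # scene fits entirely into the 24GB framebuffer
dual_ti_seconds = 2 * 60 + 27    # 2:27, forced to use slower out-of-core memory

slowdown = dual_ti_seconds / rtx_3090_seconds
print(f"Out-of-core rendering took {slowdown:.1f}x as long "
      f"({dual_ti_seconds}s vs {rtx_3090_seconds}s)")
```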

However, a pair of RTX 2080 Tis are faster than a single card and the results are summarized in the chart below.

So for rendering, it appears that two linked RTX 2080 Tis are faster than one in OTOY rendering. Let’s look at Blender next.

Blender

Blender is a popular free open source 3D creation suite that supports modeling, rigging, animation, simulation, rendering, compositing, motion tracking, video editing, and the 2D animation pipeline. NVIDIA’s OptiX accelerated rendering in Blender Cycles is used to accelerate final frame rendering and interactive ray-traced rendering in the viewport to give creators real-time feedback without the need to perform time-consuming test renders. The 24GB framebuffer on the RTX 3090 allows it to perform final frame and interactive renders that may fail due to a smaller vRAM framebuffer on the RTX 3080 or issues with linked RTX 2080 Tis.

This large render took 31.24 seconds using the RTX 3090 but it caused an error when we tried fitting the scene into the linked RTX 2080 Ti’s framebuffer and it could not complete the render as shown below.

However, it did render with a single RTX 2080 Ti, and here is the summary chart.

So in regard to mGPU and Blender rendering, it appears that “it depends”.

Finally, we looked at Blackmagic Design DaVinci Resolve and 8K Redcode RAW projects.

Blackmagic Design DaVinci Resolve | 8K Redcode RAW projects

Blackmagic’s DaVinci Resolve combines professional 8K editing, color correction, visual effects and audio post production into one software package. 8K projects featuring 8K REDCODE RAW (R3D) files will use most of the memory available on an RTX 3080, which results in out-of-memory errors, particularly when intensive effects are added. Indeed, the RTX 3090/24GB was able to perform a very intensive LFB project quickly using an 8K R3D RED CAMERA clip on an 8K timeline with a temporal noise reduction processing effect applied. In contrast, the RTX 3080 and a pair of RTX 2080 Tis just generated error messages, which means that we would have to work around them – taking a lot of extra time and effort. There is really no quantitative benchmark here.

Older single cards – the RTX 2080 Ti and the TITAN Xp – can run many of these workloads with various degrees of success without errors, but they are much slower than the RTX 3090.

After seeing the totality of these benches, creative users will probably prefer to upgrade their existing systems with a new RTX 3090 based on the performance increases and the associated gains in productivity that they require. The decision to buy the RTX 3090 or a second RTX 2080 Ti should probably be based on the workflow and requirements of each user as well as their budget. Time is money depending on how these apps are used. If a professional needs a lot of framebuffer, the RTX 3090 is a logical choice. Hopefully the benchmarks that we ran may help you decide.

Let’s head to our conclusion.

Conclusion

This has been an enjoyable exploration evaluating the Ampere RTX 3090 versus a pair of NVLinked RTX 2080 Tis – formerly the fastest gaming card in the world. Overall, the RTX 3090 totally blows away its other competitors and it is much faster at almost everything we threw at it. The RTX 3090 at $1499 is the upgrade from a (formerly) $1199 RTX 2080 Ti since a $699 RTX 3080 gives about a 20-25% improvement in 4K gaming. If a gaming enthusiast wants the very fastest card – just as the RTX 2080 Ti was for the past two years – and doesn’t mind the $300 price increase, then the RTX 3090 is the only choice.

Forget RTX 2080 Ti SLI as it is legacy, finicky, and requires workarounds with most games to get it to scale at all. SLI gaming uses too much power, puts out extra heat and noise, and it works mostly with older games – but if you are willing to tweak them and use older drivers and don’t mind some frametime instability it may be an option. Native mGPU is supported by so few game devs that it is almost non-existent.

For pro apps and rendering, using two RTX 2080 Tis with an NVLink high bandwidth bridge is somewhat hit or miss. Some applications support it well while others have issues unless they offer specific support for it. Some very creative users who are able to do their own programming may be able to work around these issues, but a general creative app user should probably skip adding a second card and use a single more powerful card instead. And if you are looking to set benchmarking world records, pick a pair of RTX 3090s instead and put your system on LN2.

The Verdict:

Skip mGPU unless you are willing to put up with its idiosyncrasies and are very skilled at working around, or if it fits your particular requirements. This is BTR’s last SLI/mGPU review for the foreseeable future. We are going to send our EVGA RTX 2080 Ti XC to Rodrigo for his future driver performance analyses so he can compare the Turing RTX 2080 Ti with the Ampere RTX 3080. He will post a GeForce 456.71 driver analysis using a RTX 3080 soon.

Stay tuned, there is a lot more on the way from BTR. Mario has upgraded his CPU platform from a quad-core i7-6700K to a i9-10850K and will have a Destiny 2 comparison between the two platforms shortly. We will also have a very special review for you soon that we just cannot talk about yet. And Sean is already working on his next VR sim review! Stay tuned to BTR.

Happy Gaming!

]]>
The RTX 3090 Founders Edition Performance Revealed – 35+ Games, SPEC & Workstation & GPGPU Benchmarked https://babeltechreviews.com/the-rtx-3090-founders-edition-performance-revealed-35-games-spec-workstation-gpgpu-benchmarked/ Thu, 24 Sep 2020 09:59:40 +0000 /?p=19051 Read more]]> The RTX 3090 Founders Edition Arrives at $1499 – Ampere Flagship Performance Revealed – 35+ Games, SPEC, Pro App & Workstation & GPGPU Benchmarked

BTR received the RTX 3090 Founders Edition (FE) from NVIDIA last Friday, and we have been testing it by using 35+ games, GPGPU benchmarks, and also by overclocking it. In addition, although the RTX 3090 is not a workstation card, we have added workstation SPEC benches, and we will also focus on big data which may take advantage of the RTX 3090’s huge 24GB vRAM framebuffer by testing selected popular creative apps.

The RTX 3090 is a beast in every way – so much so, that BTR has nicknamed it “The Beast”. It is the fastest video card for gaming – so fast, that we will also test 8K gaming. But gaming is not primarily its purpose – according to NVIDIA. At its heart, it is also a professional app card for creators, and it may even be a ‘bargain’ upgrade from the $2500 RTX TITAN for these purposes.

We have already covered Ampere’s features in depth and we have reviewed the RTX 3080, the 3090’s $699 lesser brother that comes equipped with 10GB of vRAM. This review will focus on RTX 3090 performance as well as to consider whether the new RTX 3090 Founders Edition at $1499 delivers a good value as a compelling upgrade from the RTX 2080 Ti which launched at $1199 two years ago.

Since we overclocked the RTX 3090, we will compare its overclocked performance versus stock with 15 games. We have added Crysis Remastered to our benching suite to see “Can it Run Crysis” at 4K. And for the first time in a BTR review, and with special thanks to Dr. Jon Peddie for giving us a crash course in SPEC benches, we will also post SPECworkstation3 GPU results. In addition, we have added creative results using the Blender 2.90 benchmark and complete Sandra 2020 and AIDA64 GPGPU benchmark results together with a more detailed look at some pro applications like Black Magic’s DaVinci, Blender rendering, and OTOY OctaneRender.

Besides comparing the RTX 3090’s performance with the RTX 3080, BTR’s test bed includes the fastest Turing cards – the RTX 2080 Ti Founders Edition (FE) and the RTX 2080 SUPER FE. We also test NVIDIA’s flagship Pascal card, the TITAN Xp plus the GTX 1080 Ti FE. There is no point in comparing any Radeons as AMD’s fastest card is slower than the slowest card we test, the GTX 1080 Ti.

We benchmark using Windows 10 64-bit Pro Edition at 2560×1440 and at 3840×2160 using Intel’s Core i9-10900K at 5.1/5.0 GHz and 32GB of T-FORCE DARK Z 3600MHz DDR4. We also use DSR to simulate 8K gaming. All games and benchmarks use the latest versions, and we use the latest GeForce Game Ready drivers for games and the latest Studio driver for testing pro apps.

Let’s first unbox the RTX 3090 Founders Edition before we look at our test configuration.

The RTX 3090 Founders Edition Unboxing

The Ampere generation RTX 3090 Founders Edition is a completely redesigned Founders Edition and here is the card, unboxed.

Just like with the RTX 3080 Founders Edition, the RTX 3090 comes in a similar “shoebox” style where the card inside lays flat at a slight incline for display. However, the RTX 3090 box is much thicker and longer.

The system requirements, contents, and warranty information are printed on the bottom of each box. The RTX 3090 requires a 750 W power supply unit, and the case must have space for a 313mm (L) x 138mm (W) three-slot card. It barely fits in our Phanteks Eclipse P400 ATX mid-tower. The thick packing of the box protects the card. The interior box was damaged on its bottom by FedEx when they ran something into the exterior shipping box, but the card itself escaped unscathed.

Inside the box and beneath the card are warnings, a quick start guide and warranty information, plus the 12-pin to dual PCIe 8-pin dongle that will be required to connect the RTX 3090 to most PSUs.

A completely redesigned shroud creates a new look for the RTX 3090 Founders Edition and provides a premium, solid, heavy feel to its industrial design. It is a very heavy 3-slot card and we use two thumbscrews to lock it down, taking care not to damage our PCIe slot.

Turning the card over, we see the unique design of the Ampere FEs with a fan on the other side as well.

This card is designed to keep the GPU cool, and by shining a light from behind, we can see the card is mostly all heatsink fins.

It appears that dust buildup can be blown out of the cooling fins with compressed air more easily than with former flagship Founders Editions which tend to run hot and then noisy. Both the RTX 3080 and the RTX 3090 are designed to take full advantage of the way most PCs cool, but a hot video card blowing air into the case may increase the case temperature and thus the CPU temperature in small form factor PCs or in low airflow cases.

There is a very large surface area for cooling so the heat is readily transferred to the fin stack, and the dual fans exhaust the heat out of the back of the case and also from the top of the card into the case’s airflow. This is necessary because the RTX 3090 needs to dissipate 350W.

The IO panel has a very large air vent and four connectors. The connectors are similar to the Founders Edition of the RTX 2080 Ti and the RTX 3080, but the VirtualLink connector for VR is no longer offered since HMD manufacturers are not using it. Three DisplayPort 1.4 connectors are included, and the HDMI port has been upgraded from 2.0 to 2.1 allowing for 4K/120Hz or 8K/60Hz over a single HDMI cable.

In our opinion, the RTX 3090 Founders Edition is a beautiful card with a very unique industrial style, and it absolutely dwarfs the RTX 3080, which is itself an imposing card.

Disassembly appears to be very difficult and should give pause to any enthusiast who may have custom watercooling in mind. In fact, we think that watercooling is a waste for this card as it doesn’t have any thermal issues, but it appears to be limited by its power delivery instead. But before we look at overclocking, power and noise, let’s check out our test configuration.

Test Configuration

Test Configuration – Hardware

  • Intel Core i9-10900K (HyperThreading/Turbo boost On; All cores overclocked to 5.1GHz/5.0GHz. Comet Lake DX11 CPU graphics)
  • EVGA Z490 FTW motherboard (Intel Z490 chipset, v1.3 BIOS, PCIe 3.0/3.1/3.2 specification, CrossFire/SLI 8x+8x), supplied by EVGA
  • T-FORCE DARK Z 32GB DDR4 (2x16GB, dual channel at 3600MHz), supplied by Team Group
  • RTX 3090 Founders Edition 24GB, stock and overclocked, on loan from NVIDIA
  • RTX 3080 Founders Edition 10GB, stock and overclocked, on loan from NVIDIA
  • RTX 2080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • RTX 2080 SUPER Founders Edition 8GB, stock clocks, on loan from NVIDIA
  • TITAN Xp Star Wars Collectors Edition 12GB, stock clocks, on loan from NVIDIA
  • GTX 1080 Ti Founders Edition 11GB, stock clocks, on loan from NVIDIA
  • 1TB Team Group MP33 NVMe2 PCIe SSD for C: drive
  • 1.92TB SanDisk enterprise class SATA III SSD (storage)
  • 2TB Micron 1100 SATA III SSD (storage)
  • 1TB Team Group GX2 SATA III SSD (storage)
  • 500GB T-FORCE Vulcan SSD (storage), supplied by Team Group
  • ANTEC HCG1000 Extreme, 1000W gold power supply unit
  • BenQ EW3270U 32″ 4K HDR 60Hz FreeSync monitor
  • Samsung G7 Odyssey (LC27G75TQSNXZA) 27″ 2560×1440/240Hz/1ms/G-SYNC/HDR600 monitor
  • DEEPCOOL Castle 360EX AIO 360mm liquid CPU cooler
  • Phanteks Eclipse P400 ATX mid-tower (plus 1 Noctua 140mm fan) – All benchmarking and overclocking performed with the case closed

Test Configuration – Software

  • GeForce 456.16 Press drivers and GeForce 456.38 public drivers (functionally identical). The Game Ready (GRD) drivers are used for gaming and the Studio drivers are used for pro/creative, SPEC, workstation, and GPGPU apps.
  • High Quality, prefer maximum performance, single display, set in the NVIDIA control panel.
  • DSR used in the NVIDIA control panel and with Windows settings to simulate 7680×4320 (from 3840×2160)
  • VSync is off in the control panel and disabled for each game
  • AA enabled as noted in games; all in-game settings are specified with 16xAF always applied
  • Highest quality sound (stereo) used in all games
  • All games have been patched to their latest versions
  • Gaming results show average frame rates in bold, with minimum frame rates shown on the chart next to the averages in a smaller italics font; higher is better. Games benched with OCAT show average framerates but the minimums are expressed by frametimes (99th-percentile) in ms where lower numbers are better.
  • Windows 10 64-bit Pro edition; latest updates v2004. DX11 titles are run under the DX11 render path. DX12 titles are generally run under DX12, and seven games use the Vulkan API.
  • Latest DirectX
  • MSI’s Afterburner, 4.6.3 beta to set the RTX 3090’s power and temperature limits to their maximums
  • EVGA Precision X1 for its automatic scan

Games

Vulkan

  • DOOM Eternal
  • Red Dead Redemption 2
  • Ghost Recon: Breakpoint
  • Wolfenstein Youngblood
  • World War Z
  • Strange Brigade
  • Rainbow 6 Siege

DX12

  • Horizon Zero Dawn
  • Death Stranding
  • F1 2020
  • Mech Warrior 5: Mercenaries
  • Call of Duty Modern Warfare
  • Gears 5
  • Control
  • Anno 1800
  • Tom Clancy’s The Division 2
  • Metro Exodus
  • Civilization VI – Gathering Storm Expansion
  • Battlefield V
  • Shadow of the Tomb Raider
  • Project CARS 2
  • Forza 7

DX11

  • Crysis Remastered
  • A Total War Saga: Troy
  • Star Wars: Jedi Fallen Order
  • The Outer Worlds
  • Destiny 2 Shadowkeep
  • Borderlands 3
  • Total War: Three Kingdoms
  • Far Cry New Dawn
  • Assassin’s Creed Odyssey
  • Monster Hunter: World
  • Overwatch
  • Grand Theft Auto V

Additional Games

  • Fortnite RTX
  • Bright Memory Infinite RTX Demo
  • RTX Quake II

Synthetic

  • TimeSpy (DX12)
  • 3DMark FireStrike – Ultra & Extreme
  • Superposition
  • Heaven 4.0 benchmark
  • AIDA64 GPGPU benchmarks
  • Blender 2.90 benchmark
  • Sandra 2020 GPGPU Benchmarks
  • SPECworkstation3
  • Octane benchmark

Professional Applications

  • Black Magic Design DaVinci Resolve, supplied by NVIDIA
  • Blender 2.90
  • OTOY Octane Render 2020 1.5 Demo – 8K Redcode RAW projects

NVIDIA Control Panel settings

Here are the NVIDIA Control Panel settings.

We used MSI’s Afterburner to set all video cards’ power and temperature limits to maximum as well as for overclocking and to increase the RTX 3090’s voltage to its maximum for additional overclocking. We also used the latest EVGA Precision X1 tool to automatically scan. Please see the overclocking section for details.

By setting the Power Limits and Temperature limits to maximum for each card, they do not throttle, but they can each reach and maintain their individual maximum clocks. This is particularly beneficial for high power cards.

Let’s check out overclocking, temperatures and noise next.

Overclocking, Temperatures & Noise

All of our performance and overclocked testing are performed in a closed Phanteks Eclipse P400 ATX mid-tower case. Inside, the RTX 3090 is a very quiet card even when overclocked and we never needed to increase its fan speeds manually or change the stock fan profile. Compared with the RTX 2080 Ti which becomes loud when it ramps up, the RTX 3090 is quieter and can barely be heard over the other fans in our PC. We overclocked the RTX 3090 using Afterburner including adding .1mV more voltage.

We used Heaven 4.0 running in a window at completely maxed-out settings at a windowed 2560×1440 to load the GPU to 98% so we could observe the running characteristics of the RTX 3090 and also to be able to compare our changed clock settings with their results instantly. At completely stock settings with the GPU under full load, the RTX 3090 ran cool and stayed below 68C with clocks that averaged from about 1860MHz to 1890MHz.
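
For readers who want to reproduce this kind of monitoring without Afterburner’s graphs, here is a minimal sketch that polls nvidia-smi (which ships with the GeForce driver) for clocks, temperature, board power, fan speed, and GPU utilization while a stress load such as windowed Heaven is running. The sample output in the comment is illustrative, not a log from our card.

```python
import subprocess
import time

QUERY = "clocks.sm,temperature.gpu,power.draw,fan.speed,utilization.gpu"

def sample() -> str:
    # Ask nvidia-smi for a single CSV line containing the fields above.
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

for _ in range(10):      # ten one-second samples while the GPU is under load
    print(sample())      # e.g. "1890 MHz, 68, 347.10 W, 55 %, 98 %" (illustrative)
    time.sleep(1)
```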

Simply raising the Power and Temperature limits to their maximums resulted in the clocks running at 1935MHz to 1965MHz with no changes in temperatures whatsoever using the stock fan profile. In fact, we never needed to adjust the stock fan profile in a cool room.

Adding .1mV to the core clock for the RTX 3090 didn’t make any difference and the clocks continued to fluctuate around 1935MHz although the temperatures rose by 1 degree to 69C.

Next we set up Precision X1, and ran its automatic scan function.

Precision X1 suggested adding +79MHz to the core, but that was as over-optimistic as with the RTX 3080 and both cards crashed when we tried to apply it. We tested manual overclocking for hours but we were able to add only 55MHz to the core. We also found that we were able to increase the memory clocks by adding +1100MHz without artifacting, but it crashed with an offset of +1200MHz. Unfortunately, we could not combine the overclocks to reach an ideal +1100MHz memory and +55MHz offset to the core. It’s a matter of supplying more voltage to either the memory or to the core.

After testing multiple combinations, our RTX 3090’s final stable overclock to achieve the highest overall performance adds +40MHz offset to the core and +600 MHz to the memory. The RTX 3090 is power-limited, and to achieve a higher overclock will take more voltage than what adding .1mV can deliver.
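
The process we followed by hand can be summarized as a simple search: raise one offset a step at a time, keep the highest setting that survives a stress run, then back off a step for margin. The sketch below illustrates the idea only; apply_offsets() and is_stable() are hypothetical placeholders for whatever tooling and stress loop you use (for example an Afterburner profile plus a timed Heaven or game run), not real APIs.

```python
def find_stable_offset(apply_offsets, is_stable, step_mhz=15, limit_mhz=150):
    """Step a single clock offset upward until the stress test fails."""
    best = 0
    for offset in range(step_mhz, limit_mhz + step_mhz, step_mhz):
        apply_offsets(core_mhz=offset)   # hypothetical helper: applies the offset
        if not is_stable():              # hypothetical helper: runs a stress test
            break                        # first failure ends the search
        best = offset
    return max(0, best - step_mhz)       # back off one step as a safety margin
```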

Overclocking the RTX 3090 brought the clocks up to a steady 1950MHz to 1965MHz. Interestingly, the GPU itself never became hot although the fan would automatically rise to around 70% in a very warm room; still very quiet. However, you may want to use oven mitts to remove the card if you shut down the PC and remove it immediately after a long series of benching.

The RTX 3090 video card gets hot although its cooling system works perfectly to keep the GPU below 70C all of the time. So hot air will get dumped into your case’s airflow. Make sure it can handle it so you don’t overheat your other hardware components. This is why we don’t think watercooling will make any difference – except to a case’s interior temperatures if its airflow is already compromised.

To see the performance increase from overclocking, we tested 15 games at 2560×1440 and at 3840×2160 resolution. The results are given after the main performance charts in the next section. So let’s check out performance on the next page.

Performance summary charts & graphs

Main Performance Gaming Summary Charts

Here are the summary charts of 33 games and 3 synthetic tests. The highest settings are always chosen and the settings are listed on the chart. The benches were run at 2560×1440 and at 3840×2160 as it is pointless to test at 1920×1080 with such a powerful card. Five cards are compared and they are listed in order starting with the most powerful card on the left to the least powerful on the right: the RTX 3090, the RTX 3080, the RTX 2080 Ti, The RTX 2080 SUPER, and the GTX 1080 Ti.

Most results, except for synthetic scores, show average framerates, and higher is better. Minimum framerates are next to the averages in italics and in a slightly smaller font. Games benched with OCAT show average framerates, but the minimums are expressed by frametimes (99th-percentile) in ms where lower is better.

All of the games that we tested ran well except for A Total War Saga: Troy and we suspect that it still may be a game or driver issue. Control also had issues with setting the render resolution for 2560×1440. The Shadow of the Tomb Raider benchmark refused to run on the GTX 1080 Ti and would crash to desktop when we attempted to access the benchmark. We note that the RTX 3080 cannot run Ghost Recon: Breakpoint at 4K/Ultimate above a slideshow because it has 10GB of vRAM, and the game needs at least 11GB.

Although some games show less of a performance increase than others, it is a blowout and the RTX 3090 FE wins every game benchmark over the RTX 3080, never mind that it crushes the RTX 2080 Ti. The RTX 3090 is the first single-GPU card that is truly suitable for 4K/60+ FPS using ultra/maxed-out settings for most modern games. In fact, we will test 8K settings on a few select games that support it.

Now we look specifically at ten plus RTX/DLSS enabled games, each using maximum ray traced settings and the highest quality DLSS where available.

RTX/DLSS Benchmarks

The RTX 3090 maintains its performance dominance over the other cards and pulls further away when RTX/DLSS are enabled. We did not bother with the GTX 1080 Ti or the TITAN Xp results as they cannot run RTX features above 1080P.

Next, we look at overclocked performance.

Overclocked benchmarks

These 15 benchmarks are run with the RTX 3090 overclocked +40MHz on the core and +600MHz on the memory versus at stock clocks.

There is a small performance increase from overclocking the RTX 3090. We used the latest beta of Afterburner to increase the voltage to its maximum .1mV offset, and it slightly improved stability and performance, unlike with the RTX 3080. We won’t overclock the RTX 3080 in the future as NVIDIA has locked it down in an attempt to maximize performance for all Founders Edition gamers, but the RTX 3090 appears to overclock a little better. Let’s check out 8K gaming next.

8K Gaming

The RTX 3090 enables play, capture, and gaming in 8K HDR with DLSS 8K support, a single HDMI 2.1 cable for connectivity to 8K TVs and displays, GeForce Experience support for 8K HDR game capture, and AV1 decode for smooth playback of 8K HDR video. However, 8K gaming at 7680×4320 requires the GPU to draw 16 times the number of pixels as at 1080p and it needs high capacity vRAM to load assets and game data.

Driving 8K at 60 FPS requires drawing two billion pixels each second or four times the number of pixels at 4K. To help improve 8K performance, NVIDIA has introduced DLSS Ultra-Performance which delivers a 9x AI super resolution (1440p → 8K) while maintaining good image quality.
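
The arithmetic behind those figures is easy to verify with a quick back-of-the-envelope check:

```python
# Pixel counts for the resolutions discussed above.
res_1080p = 1920 * 1080
res_1440p = 2560 * 1440
res_4k = 3840 * 2160
res_8k = 7680 * 4320

print(res_8k / res_1080p)   # 16.0  -> 8K draws 16x the pixels of 1080p
print(res_8k / res_4k)      # 4.0   -> and 4x the pixels of 4K
print(res_8k * 60 / 1e9)    # ~1.99 -> roughly two billion pixels per second at 60 FPS
print(res_8k / res_1440p)   # 9.0   -> the 9x upscale that DLSS Ultra-Performance covers
```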

Since we do not have an 8K display, we first tested synthetic 8K game benchmarks.

Of course, synthetic benchmarks are completely meaningless when looking for game performance, so we tried 8K gaming using NVIDIA’s DSR to simulate it from 4K. It works pretty well, but it takes around a 5% extra performance hit over using a native 8K display. 8K DSR still looks awesome and this image from Death Stranding was captured using DSR/8K with Quality DLSS. Just to upload it here, we had to scale it back down and compress it further.

Here are our 8K game benchmarks.

8K gaming is possible on the RTX 3090, but probably not at maximum settings. However, by lowering settings and by using the new Ultra Performance DLSS (or Uber in Youngblood), it is possible to game at 8K above 60 FPS now.

Let’s look at Creative applications next to see if the RTX 3090 is a good upgrade from the other video cards we test starting with Blender.

Blender 2.90 Benchmark

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We have seen Blender performance increase with faster CPU speeds, so we decided to try several Blender 2.90 benchmarks which also can measure GPU performance by timing how long it takes to render production files. We tested our six comparison cards with both CUDA and Optix running on the GPU instead of using the CPU.

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.
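
Because the benchmark reports minutes and seconds while comparisons are easier in plain seconds, a small helper like the one below is handy; the two example times are illustrative placeholders, not our measured results.

```python
def to_seconds(mmss: str) -> int:
    """Convert a Blender benchmark time such as '1:10' into seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Illustrative values only, not measurements from our cards.
slower_card = to_seconds("1:10")
faster_card = to_seconds("0:45")
print(f"The faster card renders the scene {slower_card / faster_card:.2f}x faster")
```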

Blender’s benchmark performance is highest using the RTX 3090, and often the amount of time saved is substantial over using the next fastest card, the RTX 3080. We also used Blender for rendering and the results are shown later on in this review.

Next we look at the OctaneBench.

OTOY Octane Bench

OctaneBench allows you to benchmark your GPU using OctaneRender. The hardware and software requirements to run OctaneBench are the same as for OctaneRender Standalone and we shall also use OctaneRender for a specific rendering test later, under “Professional Apps”.

We run OctaneBench 2020.1 for windows and here are the RTX 3090’s complete results and overall score of 652.30.

We compare it with the score and results for the RTX 3080 – about a hundred points less than the RTX 3090, at 552.52.

Here is the summary chart comparing the RTX 3090, the RTX 3080, the RTX 2080 Ti, and the TITAN Xp.

The RTX 3090 is a beast of a card when used for rendering.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.25

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and give scores to compare against other popular video cards.

AIDA64’s benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer’s full version of AIDA64 courtesy of FinalWire. AIDA64 is free to try and use for 30 days. CPU results are also shown for comparison with the RTX 3090 GPGPU benchmarks.

Here is the chart summary of the AIDA64 GPGPU benchmarks with the RTX 3090, the RTX 3080, the RTX 2080 Ti, and the TITAN Xp side-by-side.

Generally the RTX 3090 is faster than the other cards in almost all of AIDA64’s GPGPU benchmarks, including the RTX 3080, and overwhelmingly so over the older cards. So let’s look at Sandra 2020 next.

SiSoft Sandra 2020

To see where the CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft’s Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information & diagnostic utility in a complete package. It is able to provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies “defender” or “helper”.

There are several versions of Sandra, including a free version of Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous multiple monthly incremental improvements over earlier versions of Sandra. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

The author of Sandra 2020 informed us that while NVIDIA has sent some optimizations, they are generic for all cards, not Ampere specific. The tensors for FP64 & TF32 (a kind of FP32) have not yet been enabled in Sandra 2020, so GEMM & convolution will get much faster once they can run on Ampere’s tensor cores. BF16 is supposed to be faster than FP16/half-float, but since precision losses are unknown it has not yet been enabled either. And finally, once the updated CUDA SDK for Ampere gets publicly released, performance should also improve.

With the above in mind, we ran Sandra’s intensive GPGPU benchmarks and charted the results summarizing them. The performance results of the RTX 3090 are compared with the performance results of the RTX 3080, the RTX 2080 Ti, and the TITAN Xp.

In Sandra GPGPU benchmarks, the RTX 3090 is faster than the RTX 3080 and it distinguishes itself from the RTX 2080 Ti and the TITAN Xp in every area – Processing, Cryptography, Financial and Scientific Analysis, Image Processing, and Bandwidth.

SPECworkstation3 (3.0.4) Benchmarks

All the SPECworkstation 3 benchmarks are based on professional applications, most of which are in the CAD/CAM or media and entertainment fields. All of these benchmarks are free except to vendors of computer-related products and/or services.

The most comprehensive workstation benchmark is SPECworkstation 3. It’s a free-standing benchmark which does not require ancillary software. It measures GPU, CPU, storage and all other major aspects of workstation performance based on actual applications and representative workloads. We only tested the GPU-related workstation performance as checked in the image above. We did not use SPECviewperf 13 since SPECviewperf 2020 is coming out in mid-October.

Here are our SPECworkstation 3.0.4 summaries and raw scores for the RTX 3090:

The benchmarks were unable to complete 3DSmax-06 and showcase-02, probably because of incompatibility with NVIDIA’s new DX12 driver. Here are the SPECworkstation3 results summarized in a chart along with the three competing cards, the RTX 3080, the RTX 2080 Ti, and the TITAN Xp. Higher is better since we are comparing scores.

The RTX 3090 is not a workstation card, yet it uses brute force to win most of the benches against the other three cards. However, we see that in three benchmarks the TITAN Xp blows past it. The TITAN Xp is a hybrid card that may have some optimizations for workstation applications, and these optimizations can make a big difference to performance.

The RTX 3090 doesn’t offer any certifications for professional applications and it is not expected to be certified for them. It is expected that in workstation specific benchmarks, there will be cases where a TITAN, and especially a Quadro board, will outperform the GeForce class RTX 3080/3090 boards. We may expect that the RTX TITAN would be faster than the RTX 3080 when it has been optimized for certain apps, and Quadro is the king of the workstation cards since NVIDIA optimizes almost all workstation tasks for it. This is why professionals pay much more for Quadro than for any GeForce with otherwise equivalent raw performance.

However, let’s look at some professional applications where a large memory buffer makes a big performance improvement over having a smaller one.

Creative Applications with Large Memory Workloads

Rendering large models, detailed scenes, and high-resolution textures requires powerful GPUs with a lot of vRAM. Render artists working at the highest quality require high-capacity GPU memory, which allows them to create more detailed final frame renders without needing to reduce the quality of their final output or split scenes into multiple renders, which takes a lot of extra time. Until now, no GeForce has been equipped with 24GB of vRAM – of our comparison cards, the RTX 3080 has 10GB, the RTX 2080 Ti offers 11GB, and the TITAN Xp has 12GB. Let’s look at three pro apps that can use much more than 10GB and also test the render times. First up is OTOY OctaneRender.

OTOY OctaneRender

OctaneRender is the world’s first spectrally correct GPU render engine with built-in RTX ray tracing GPU hardware acceleration. The RTX 3090 allows large scenes to fit completely into its 24GB of GPU memory so out-of-core rendering is not necessary, providing faster rendering times than GPUs with less memory capacity that must fall back to out-of-core data. We tested the RTX 3090/24GB against the RTX 3080/10GB, the RTX 2080 Ti/11GB, and the TITAN Xp/12GB.
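
A rough way to anticipate whether a scene will stay in-core is simply to add up its biggest GPU allocations and compare the total against each card’s framebuffer; the asset sizes below are made-up placeholders rather than figures from our test scene.

```python
GiB = 1024 ** 3

# Hypothetical scene budget (placeholders, not measured values).
scene_bytes = sum([
    9 * GiB,   # geometry / BVH
    7 * GiB,   # textures
    4 * GiB,   # framebuffers, denoiser and other working buffers
])

for name, vram in [("RTX 3090", 24 * GiB), ("TITAN Xp", 12 * GiB),
                   ("RTX 2080 Ti", 11 * GiB), ("RTX 3080", 10 * GiB)]:
    verdict = "fits in-core" if scene_bytes <= vram else "needs out-of-core memory"
    print(f"{name}: {verdict}")
```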

Following NVIDIA’s very specific instructions, we rendered a very large detailed image. Looking closely, we see that out-of-core data was not needed since the entire render fit into the 24GB vRAM buffer, and the large image provided only took 45 seconds to render.

We tested the RTX 3080, and it took much longer at 8 minutes and 38 seconds, since 10GB of vRAM is insufficient to allow the render to fit into super-fast GPU memory, and the out-of-core data needed to be accessed from system memory.
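
To put those two numbers in perspective, here is a quick back-of-the-envelope speedup calculation. It uses only the render times we measured above; it is not an OctaneRender benchmark script.

```python
# Speedup from keeping the whole scene in vRAM, using the times measured above.
in_core_s = 45                 # RTX 3090: scene fits in the 24GB buffer
out_of_core_s = 8 * 60 + 38    # RTX 3080: out-of-core data paged from system memory (8:38 = 518 s)

print(f"Speedup from staying in vRAM: {out_of_core_s / in_core_s:.1f}x")  # ~11.5x
```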

Similar results were obtained with the RTX 2080 Ti and the TITAN Xp, and they are summarized in the chart below.

The TITAN Xp doesn’t handle ray-traced rendering particularly well, and the RTX 2080 Ti is faster than the RTX 3080 here, but none of these cards can match the rendering speed of the RTX 3090. Let’s look at Blender next.

Blender

Blender is a popular free, open-source 3D creation suite that supports modeling, rigging, animation, simulation, rendering, compositing, motion tracking, video editing, and the 2D animation pipeline. NVIDIA’s OptiX-accelerated rendering in Blender Cycles is used to speed up final frame rendering and interactive ray-traced rendering in the viewport, giving creators real-time feedback without the need for time-consuming test renders. The 24GB framebuffer on the RTX 3090 allows it to complete final frame and interactive renders that may fail on the RTX 3080’s 10GB framebuffer.
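
For readers who want to reproduce this kind of test, here is a minimal sketch of how OptiX GPU rendering can be selected from Blender’s Python console. The property names reflect recent Blender builds with OptiX support and may differ slightly between versions, so treat it as a starting point rather than a definitive recipe.

```python
# Minimal sketch: select OptiX GPU rendering for Cycles via Blender's Python API.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"        # fall back to "CUDA" on non-RTX GPUs
prefs.get_devices()                        # refresh the detected device list
for device in prefs.devices:
    device.use = (device.type == "OPTIX")  # enable only the OptiX device(s)

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)    # kick off a final-frame render
```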

This large render took 31.24 seconds on the RTX 3090, but it crashed when we tried to fit the scene into the RTX 3080’s framebuffer.

Here is the summary chart.

11GB is evidently enough vRAM to fit the entire image into the RTX 2080 Ti’s framebuffer, and it managed to complete the render quickly, though about 13.5 seconds slower than the RTX 3090. The TITAN Xp is showing its age, plus an inability to handle ray tracing very well, and it was very slow to render the scene even though it has more vRAM than the RTX 2080 Ti.

Finally, we looked at Blackmagic Design DaVinci Resolve and 8K Redcode RAW projects.

Blackmagic Design DaVinci Resolve | 8K Redcode RAW projects

Blackmagic’s DaVinci Resolve combines professional 8K editing, color correction, visual effects, and audio post production into one software package. 8K projects featuring REDCODE RAW (R3D) files will use most of the memory available on the RTX 3080, which results in out-of-memory errors, particularly when intensive effects are added. Indeed, the RTX 3090/24GB was able to quickly complete a very intensive LFB project using an 8K R3D RED camera clip on an 8K timeline with a temporal noise reduction effect applied. In contrast, the RTX 3080 just generated error messages, which means we would have to work around them, taking a lot of extra time and effort. There is really no quantitative benchmark here.
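
Resolve’s internal memory management is not public, but a rough calculation shows why an 8K timeline with temporal noise reduction is so demanding. The resolution and 32-bit float RGBA working precision below are our assumptions for illustration only.

```python
# Rough working-set estimate for 8K timeline frames in 32-bit float RGBA.
# Resolution, precision, and frame count are assumptions for illustration.
width, height = 8192, 4320
bytes_per_frame = width * height * 4 * 4        # RGBA at 4 bytes per channel
gb_per_frame = bytes_per_frame / (1024 ** 3)    # ~0.53 GB per frame

frames_held = 5                                 # e.g. temporal NR referencing neighboring frames
print(f"{gb_per_frame:.2f} GB per frame, ~{frames_held * gb_per_frame:.1f} GB "
      "before caches, effect buffers, and the decoded R3D data")
```

Even a handful of full-precision 8K frames plus the decoded R3D data and effect buffers can crowd a 10GB card, while the same working set fits comfortably in 24GB.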

When looking at large framebuffer workloads, the comparison NVIDIA wants us to see is between the RTX 3080 and the RTX 3090. The older cards, the RTX 2080 Ti and the TITAN Xp, can run many of these workloads without errors and with varying degrees of success, but the point of highlighting these features on the RTX 3090/24GB versus the RTX 3080/10GB is to help choose the right card based on the needs of a professional using Resolve or the other creative apps that we highlighted.

After seeing the totality of the benches, many creative users will probably upgrade their existing systems with a new RTX 30 series card based on the performance increases and the associated gains in productivity that they require. The decision to buy the RTX 3090 or the RTX 3080 should be based on the workflow and requirements of each user as well as budget. Time is money depending on how these apps are used, and if a professional needs a lot of framebuffer, the RTX 3090 is the logical choice. Hopefully the benchmarks that we ran will help you decide.

Let’s head to our conclusion.

This has been a very enjoyable exploration evaluating the new Ampere RTX 3090 versus the other cards we tested. The RTX 3090 performed brilliantly compared to the RTX 2080 Ti, formerly the fastest gaming card in the world, and it totally blows away its other competitors. The RTX 3090 at $1499 is the natural upgrade from the $1199 RTX 2080 Ti, since the RTX 3080 only gives about a 20-25% improvement over it. If a gaming enthusiast wants the very fastest card, just as the RTX 2080 Ti was for the past two years, and doesn’t mind the $300 price increase, then it is the only choice for gaming, especially as the only card that can run new 8K games at mostly high settings with DLSS 2.0.

NVIDIA says that the RTX 3080 is the gaming card and the RTX 3090 is the hybrid creative card, but we respectfully disagree. The RTX 3090 is the flagship gaming card that can also run intensive creative apps very well, especially by virtue of its huge 24GB framebuffer. But it is still neither an RTX TITAN nor a Quadro; those cards cost a lot more and are optimized specifically for workstation, professional, and creative apps.

However, for RTX 2080 Ti gamers who paid $1199 for a card that has now been eclipsed by the RTX 3080, and who have disposable cash for their hobby, the $1500 RTX 3090 Founders Edition is the card that maximizes their upgrade. And for high-end gamers who also use creative apps, this card may become a very good value. Hobbies are expensive to maintain, and the expense of PC gaming pales in comparison to what golfers, skiers, audiophiles, and many other hobbyists pay for their entertainment. But for high-end gamers on a budget, the $699 RTX 3080 provides the better value of the two cards. We cannot call the $1500 RTX 3090 a “good value” for gamers generally, as it is a halo card that does not come anywhere close to doubling the performance of a $700 RTX 3080.

However, for some professionals, two RTX 3090s may give them exactly what they need, as it is the only Ampere gaming card to support NVLink, providing up to 112.5 GB/s of total bandwidth between two GPUs and, when linked together, access to a massive 48GB of vRAM. SLI is no longer supported by NVIDIA for gaming, and emphasis will instead be placed on mGPU as implemented by game developers.
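
For professionals considering the dual-card route, it is worth verifying that the bridge is actually active. Here is a sketch using the nvidia-ml-py (pynvml) bindings; the number of links polled per GPU is an assumption, and exact return values can vary by driver and NVML version.

```python
# Sketch: report how many NVLink links are active on each GPU via pynvml.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):              # older pynvml versions return bytes
        name = name.decode()
    active = 0
    for link in range(6):                    # assumed upper bound on links per GPU
        try:
            if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                active += 1
        except pynvml.NVMLError:
            break                            # link index not supported on this GPU
    print(f"GPU {i} ({name}): {active} active NVLink link(s)")
pynvml.nvmlShutdown()
```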

Conclusion

We are very impressed with the Founders Edition of the RTX 3090 after spending more than 100 hours testing it over the past 6 days. It offers exceptional performance at Ultra 4K and it even supports playable gaming at 8K. It stands alone as the fastest video card in the world. The Founders Edition of the RTX 3090 is well-built, solid, and good-looking, and it stays cool and quiet even when overclocked; the card gets hot, but never the GPU. The RTX 3090 Founders Edition offers a big performance improvement over any Pascal or Turing Founders Edition in every metric.

Pros

  • The RTX 3090 is the fastest video card in the world
  • At $300 more than the RTX 2080 Ti’s launch price, the RTX 3090 is a big jump in performance over all older cards
  • 24GB of vRAM allows for 8K gaming and is also very useful for intensive creative apps
  • Ray tracing is a game changer in every way
  • Ampere improves on Turing’s AI/deep learning and ray tracing to deliver better visuals while also increasing performance with DLSS 2.0 and Ultra Performance DLSS
  • The RTX 3090 Founders Edition cooling design is quiet and efficient; in a well-ventilated case the GPU stays cool even when overclocked
  • The industrial design is eye-catching and it is solidly built

Con

  • Price. At $1500, the RTX 3090 is not a good value for gaming except as a multi-purpose halo card or for bragging rights

The Verdict:

If you are a gamer who also uses creative apps where saving time is important, you may do yourself a favor by upgrading to an RTX 3090. For high-end gamers with disposable income, the RTX 3090 is a true 4K/60+ FPS video card for most modern games, offers the highest performance as an upgrade from an RTX 2080 Ti, and can even handle the demands of 8K gaming.

There is a lot more on the way from BTR. Next up, we will test the RTX 3090 and the RTX 3080 in VR versus the RTX 2080 Ti using the Vive Pro, with an ETA of early next week. Stay tuned to BTR!

Happy Gaming!
