HW News
05:14 | AMD & RTX GPUs Nearly Non-Existent in Steam Survey
The latest Steam hardware survey, covering August, is now available, and in light of Nvidia’s upcoming RTX 3000-series, it seems clear why the Turing cards got snubbed at Nvidia’s recent event. Turing was a slow burn that never took off the way Pascal did, and Steam’s survey data reinforces that.
Outside of that, these surveys also tend to highlight other interesting trends — or lack thereof — amongst PC users. While the surveys don’t show the whole picture, mostly because they can’t see how many RGB LEDs you have in your computer… yet, Steam does poll thousands of users for their hardware configurations, and the results provide a meaningful snapshot. Here are some key takeaways.
- Quad-core CPUs are still king, accounting for 45.76% of the survey.
- Most users (41.21%) are still running 16GB of RAM, with only ~9% running any configuration beyond 16GB. That’s a shift from a few years ago, when 8GB was dominant. Expanding the System RAM list on the hardware survey shows 8GB trending down at about the same rate that 16GB is trending up; 8GB now sits at 32% (see the sketch after this list).
- The most popular resolution is still 1920 x 1080, which Steam says accounts for over 65% of its users. The next most popular is 2560 x 1440, at a distant 6.59%. 1440p never quite got the same focus as 1080p and 4K, and so has been a weird middle ground. 3840×2160 sits at just 2.24% of the market, up 0.01% since last period. UltraWide resolutions haven’t moved much, going by the 3440×1440 result. Perhaps unsurprisingly, 1366×768 accounts for about 10% of the market, with minimal movement. This points to a lot of laptop users on Steam, either playing lighter-weight games or just installing it for the friends list.
- 8GB GPUs have overtaken 6GB GPUs, with a reported 22.73% of users having 8191 MB of VRAM. 6GB GPUs (6143 MB) came in a close second, making up 21.69% of polled users.
- The most popular GPU is still Nvidia’s GTX 1060, with 10.75% of Steam users owning the card. The GTX 1060 has long been the most popular card amongst Steam survey participants, even though its share has been slowly eroding, marking a 0.46% decline since April. Meanwhile, the most popular RTX 20-series card is the RTX 2060 — at a distant 2.88%. The RTX 2060 has been steadily moving up since April, with a 0.32% uptick in adoption. The RTX 2080 Ti still accounts for less than 1% of Steam users. The RTX 20-series stats are further muddied by the fact that NVIDIA launched its Super refreshes not long after the original cards, so adoption is split across those entries.
- AMD is barely in the top 10, with the RX 580 at rank 9; its next card is the RX 570, at rank 16. As for the RX 5700 XT, it doesn’t appear until way down the list, somehow below the 2080 Ti despite costing about one-third as much. The 5700 XT holds about 0.88% of the market.
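To put the trend math in one place, here’s a minimal sketch of the month-over-month delta calculation mentioned above. Only the current 32% and 41.21% figures come from the survey; the “previous” values are placeholders we invented for illustration.

```python
# Minimal sketch of month-over-month share deltas, Steam-survey style.
# Only the current values are real survey figures; the previous values
# are invented placeholders for illustration.

ram_share = {
    # category: (previous %, current %)
    "8 GB":  (33.5, 32.0),
    "16 GB": (39.7, 41.21),
}

for category, (prev, curr) in ram_share.items():
    delta = curr - prev
    print(f"{category}: {curr:.2f}% ({delta:+.2f} points month-over-month)")
```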
Source: https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
09:56 | The First Round of Displays Supporting Nvidia Reflex
As part of Nvidia’s RTX 3000-series announcement, it mentioned the new Nvidia Reflex feature, which is aimed at reducing end-to-end system latency. Nvidia didn’t offer many details initially, but we speculated that it’s an initiative to chase total system latency as raw FPS scales to the point of mattering less (i.e., 240 FPS vs 360 FPS).
Nvidia made it clear that the new Reflex feature would coincide with bringing new 360Hz G-Sync displays to market. Now, Nvidia and its primary display partners — Asus, Acer, Alienware, and MSI — are co-announcing that the first round of those displays is coming later this fall. They are as follows: MSI Oculux NXG253R, Asus PG259QN, Alienware 25, and Acer Predator X25.
After learning more about Nvidia Reflex, we now know it’s composed of two primary components: The Nvidia Reflex SDK and the Nvidia Reflex Latency Analyzer. Of the SDK, Nvidia states that this is “A new set of APIs for game developers to reduce and measure rendering latency. By integrating directly with the game, Reflex Low Latency Mode aligns game engine work to complete just-in-time for rendering, eliminating the GPU render queue and reducing CPU back pressure in GPU intensive scenes. This delivers latency reductions above and beyond existing driver-only techniques, such as NVIDIA Ultra Low Latency Mode.”
At a high level, it seems this SDK effectively takes what Nvidia’s Ultra Low Latency Mode does at the driver level and bakes it directly into the game, with the aim of speeding up the overall rendering pipeline.
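Nvidia hasn’t published the SDK’s internals, so the following is purely our own conceptual sketch of that just-in-time scheduling idea, with invented names and numbers, not the actual Reflex API:

```python
import time

# Conceptual sketch only: NOT the Reflex SDK. It illustrates the idea of
# sleeping until just before the GPU is ready, then sampling input and
# submitting, instead of letting the CPU queue frames ahead of the GPU
# (queued frames are where much of the added latency lives).

GPU_FRAME_TIME = 1 / 240  # illustrative: GPU finishes a frame every ~4.2 ms
CPU_WORK_TIME = 1 / 480   # illustrative: CPU sim + submit takes ~2.1 ms

def sample_input() -> float:
    """Stand-in for polling the mouse; returns the sample timestamp."""
    return time.perf_counter()

def simulate_and_submit() -> None:
    """Stand-in for game simulation and render submission."""
    time.sleep(CPU_WORK_TIME)

def run(frames: int = 5) -> None:
    next_gpu_slot = time.perf_counter() + GPU_FRAME_TIME
    for _ in range(frames):
        # Wake just early enough to finish CPU work as the GPU frees up,
        # so the sampled input is as fresh as possible on screen.
        delay = (next_gpu_slot - CPU_WORK_TIME) - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        sample_input()
        simulate_and_submit()
        next_gpu_slot += GPU_FRAME_TIME

run()
```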
The Nvidia Reflex Latency Analyzer is a feature baked directly into the G-Sync modules of supporting displays. G-Sync displays supporting Nvidia Reflex will come with a “Reflex Latency Analyzer USB port” for users to plug their mouse into. This is a passthrough port that monitors mouse clicks and measures the time it takes for a pixel change to happen on screen. The feature is aimed at letting users measure system latency, as the analyzer will report mouse latency, PC + display latency, and total system latency.
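The reporting itself reduces to simple arithmetic. Below is a hedged sketch of the decomposition as we understand it; every number is an invented example, not a Reflex measurement:

```python
# What the Latency Analyzer reports, expressed as arithmetic: it timestamps
# the click at the passthrough port, then watches for the resulting pixel
# change on screen. All numbers below are invented examples.

def system_latency_ms(t_click_ms: float, t_pixel_ms: float) -> float:
    """End-to-end (click-to-photon) latency."""
    return t_pixel_ms - t_click_ms

mouse_latency = 4.0                      # reported via a supported mouse, ms
total = system_latency_ms(0.0, 34.5)     # click at t=0, pixel change at 34.5 ms
pc_plus_display = total - mouse_latency  # the remainder of the chain
print(f"system={total:.1f}ms, mouse={mouse_latency:.1f}ms, "
      f"pc+display={pc_plus_display:.1f}ms")
```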
Additionally, peripheral makers such as Asus, Logitech, Razer, and SteelSeries will be offering mice that allow for measuring mouse latency. Nvidia will also be maintaining a public database of average mouse latencies, and Nvidia Reflex software metrics will be incorporated into the GeForce Experience suite.
https://www.nvidia.com/en-us/geforce/news/g-sync-360hz-gaming-monitors
14:04 | HDMI Single-Cable 8K Support
NVIDIA made a big deal about 8K gaming on its RTX 3090, something we’ll hopefully be testing, but that brought about questions of cable support. In a follow-up architecture day event, NVIDIA noted that HDMI 2.1 support permits driving an 8K HDR TV at 60Hz over a single cable. NVIDIA noted that, previously, an 8K PC monitor would require 2x DisplayPort cables for 8K60 SDR, or 4x HDMI 2.0 cables for 8K60 HDR, though we are neither Jensen Huang nor Linus Sebastian, so we don’t have the monitors for first-hand testing. Moving to HDMI 2.1 on the RTX 3000 GPUs eliminates the need for multiple cables to drive one display.
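As a rough sanity check on those cable counts, consider the raw pixel rate of an 8K60 HDR signal. The calculation below ignores blanking intervals and link overhead, and in practice HDMI 2.1 also leans on DSC compression for 8K60 HDR, so treat it as back-of-the-envelope math:

```python
# Back-of-the-envelope math on why 8K60 HDR needed multiple HDMI 2.0 cables.
# Blanking intervals and link overhead are ignored, so these are rough
# figures; HDMI 2.1 also relies on DSC compression for full 8K60 HDR.

def raw_video_gbps(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    return width * height * fps * bits_per_pixel / 1e9

rate = raw_video_gbps(7680, 4320, 60, 30)  # 10-bit RGB = 30 bits per pixel
print(f"8K60 10-bit RGB: ~{rate:.0f} Gbps of raw pixel data")
print("HDMI 2.0 link: 18 Gbps; HDMI 2.1 link: 48 Gbps")
```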
https://www.hdmi.org/spec/hdmi2_1
15:18 | PC-Specific CPU Optimizations for Marvel’s Avengers
Marvel’s Avengers fully launched this past week, and Intel recently revealed that it collaborated with Crystal Dynamics on CPU-specific optimizations for the game. The optimizations no doubt focus on the game’s physics engine, with Intel’s marketing highlighting a few key points.
- “The force and shockwave of each special move will create more detailed rubble and debris”
- “With every powerful blow, stomp, blast or smash, you’ll see more persistent armor shards, in more detail, flying in more pieces and more places”
- “With the optimal balance of cores, threads and frequency, any interaction with water becomes a richer, more responsive experience. Water splashes and reacts as it naturally would in the real world.”
Neither Intel nor Crystal Dynamics divulged any real technical information. However, there’s a video that shows the enhanced enemy and environmental destruction, as well as the improved water simulation. The video compares frames rendered with these settings on versus off, and the improvements seem to be noteworthy.
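Since neither company shared implementation details, here’s a purely hypothetical sketch of how a game might gate simulation detail on available hardware threads; the function and the numbers are ours, not Crystal Dynamics’:

```python
import os

# Hypothetical sketch -- not Crystal Dynamics' implementation. Optimizations
# like these typically scale simulation detail with available hardware
# threads, along these lines:

def debris_budget(threads: int) -> int:
    """Scale the cap on persistent debris pieces with thread count."""
    base = 256  # illustrative floor for low-core-count CPUs
    return base * max(1, threads // 4)

threads = os.cpu_count() or 4
print(f"{threads} threads -> up to {debris_budget(threads)} debris pieces")
```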
Intel is promising to support Marvel’s Avengers for the next two years and is planning GPU-specific optimizations for the game as well, presumably for its upcoming Xe Graphics GPUs.
We’ve seen Intel add features like this in the past, like self-shading shadows in some racing games.
Source: https://www.intel.com/content/www/us/en/gaming/play-avengers.html
https://www.youtube.com/watch?v=DT-tyB7OIbs&feature=emb_logo
17:18 | Mellanox and Cumulus Become Nvidia Networking
Nvidia recently finished its $7B acquisition of Mellanox, and it seems to have wasted no time integrating it into the company and rebranding it. Mellanox will now be known as Nvidia Networking, and while Nvidia hasn’t expressly confirmed this, Mellanox’s updated website and Twitter account more or less do.
Nvidia also has a new Nvidia Networking Twitter account that merges Mellanox Technologies and Cumulus Networks. Nvidia acquired Cumulus Networks for an undisclosed sum back in May. Cumulus offers the Cumulus Linux OS for networking switches, which Mellanox has been using since its Spectrum line of switches launched in 2016.
“NVIDIA Networking formerly Mellanox Technologies and Cumulus Networks. Ethernet and InfiniBand solutions that are turning the data center into one compute unit,” reads the Nvidia Networking Twitter page.
With both Mellanox and Cumulus under Nvidia’s roof, Nvidia can scale its HPC platform across not only its chips, but also software and networking.
Source: https://www.tomshardware.com/news/mellanox-technologies-absorbed-and-rebranded-as-nvidia-networking
18:59 | RTX 3000-Series Recap
The biggest news of the week was Nvidia finally taking the wraps off of its RTX 3000-series cards. We won’t go back over all the details here; we have both a video and an article on this. We will quickly recap the basics, though, which have mostly been aggregated into the table below.
| Model | GeForce RTX 3090 | GeForce RTX 3080 | GeForce RTX 3070 |
| --- | --- | --- | --- |
| CUDA Cores | 10496 | 8704 | 5888 |
| Base Clock | 1.4GHz | 1.44GHz | 1.5GHz |
| Boost Clock | 1.7GHz | 1.71GHz | 1.73GHz |
| VRAM | 24GB GDDR6X, 19.5Gbps | 10GB GDDR6X, 19Gbps | 8GB GDDR6, 16Gbps |
| Memory Bandwidth | 935.8 GB/s | 760 GB/s | 512 GB/s |
| Bus Width | 384-bit | 320-bit | 256-bit |
| RT Cores | 82 | 68 | 46 |
| Tensor Cores | 328 | 272 | 184 |
| Architecture | Ampere | Ampere | Ampere |
|  | 96 | 88 | 64 |
| *Graphics Card Power | 350W | 320W | 220W |
| Recommended PSU | 750W | 750W | 650W |
| Manufacturing | Custom Samsung 8nm | Custom Samsung 8nm | Custom Samsung 8nm |
| DirectX 12 Support | Yes | Yes | Yes |
| Nvidia DLSS | Yes | Yes | Yes |
| PCIe Gen 4 | Yes | Yes | Yes |
| Launch Date | 09/24/2020 | 09/17/2020 | 10/2020 |
| MSRP | $1,500 | $700 | $500 |
*Nvidia lists power specs as “Graphics Card Power,” which isn’t necessarily the same as TGP or TDP. This number could represent GPU-only or GPU + memory, but we will validate this in our review. All numbers above are from NVIDIA’s official materials and have not yet been independently verified.
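One spot check we can do now: the memory bandwidth figures follow directly from the listed memory speeds and bus widths, as the sketch below shows.

```python
# Sanity-checking the bandwidth rows above: bandwidth is the per-pin data
# rate times the bus width, divided by 8 bits per byte.

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

for name, rate, bus in [("RTX 3090", 19.5, 384),
                        ("RTX 3080", 19.0, 320),
                        ("RTX 3070", 16.0, 256)]:
    print(f"{name}: {bandwidth_gb_s(rate, bus):.0f} GB/s")
# Prints 936, 760, and 512 GB/s -- matching Nvidia's figures to rounding.
```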
The most interesting aspect — to us, at least — is the new cooling design for FE cards, and what implications it will have. We suspect it will add a new angle to the air versus liquid cooling debate, and we plan to test this exhaustively.
The RTX 3000-series will see Nvidia reposition its product stack, reintroducing the RTX 3080 as the series’ flagship at $700, while the RTX 3090 will be a Titan-class card with a titanic $1,500 price tag to match. Meanwhile, the RTX 3070 is supposed to be more powerful than the RTX 2080 Ti for $500. Nvidia recently showed some footage of Doom Eternal running at 4K with framerates well above 100FPS, captured on an RTX 3080.
Several AIB partners have announced some of their initial models as well; these will use more traditional cooling solutions.
Source: GN
21:54 | Intel Tiger Lake and Rebranding
Intel finally took the wraps off its much-rumored 11th-gen (Tiger Lake) mobile CPUs this past week. While the chips seemed impressive enough on their own, Intel couldn’t seem to steer the conversation away from AMD and its Ryzen 4000-series APUs. We also have a video on the Tiger Lake announcement, so we’ll keep this segment brief.
Intel’s naming paradigms continue to get more convoluted, as it’s introducing even more product identifiers with Tiger Lake. To start, Tiger Lake will come in one of two packages, depending on the power target: UP3 or UP4. The UP3 package essentially succeeds the higher-performance U-series, aimed at 12W to 28W, while the UP4 package succeeds the Y-series at 7W to 15W. Each package pairs a compute die built on Intel’s 10nm SuperFin process with a 14nm I/O die. Tiger Lake uses Willow Cove cores and either integrated Xe LP graphics or Intel UHD graphics.
| Package | Model | GPU | Cores/Threads | EUs | Cache | Memory | Power | Base Freq | Single-Core Turbo | All-Core Turbo | GPU Freq |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UP3 | i7-1185G7 | Xe LP | 4/8 | 96 | 12MB | DDR4-3200, LPDDR4x-4266 | 12W-28W | 3.0GHz | 4.8GHz | 4.3GHz | 1.35GHz |
|  | i7-1165G7 | Xe LP | 4/8 | 96 | 12MB | DDR4-3200, LPDDR4x-4266 | 12W-28W | 2.8GHz | 4.7GHz | 4.1GHz | 1.30GHz |
|  | i5-1135G7 | Xe LP | 4/8 | 80 | 8MB | DDR4-3200, LPDDR4x-4266 | 12W-28W | 2.4GHz | 4.2GHz | 3.8GHz | 1.30GHz |
|  | i3-1125G4 | UHD | 4/8 | 48 | 8MB | DDR4-3200, LPDDR4x-3733 | 12W-28W | 2.0GHz | 3.7GHz | 3.3GHz | 1.25GHz |
|  | i3-1115G4 | UHD | 2/4 | 48 | 6MB | DDR4-3200, LPDDR4x-3733 | 12W-28W | 3.0GHz | 4.1GHz | 4.1GHz | 1.25GHz |
| UP4 | i7-1160G7 | Xe LP | 4/8 | 96 | 12MB | LPDDR4x-4266 | 7W-15W | 1.2GHz | 4.4GHz | 3.6GHz | 1.1GHz |
|  | i5-1130G7 | Xe LP | 4/8 | 80 | 8MB | LPDDR4x-4266 | 7W-15W | 1.1GHz | 4.0GHz | 3.4GHz | 1.1GHz |
|  | i3-1120G4 | UHD | 4/8 | 48 | 8MB | LPDDR4x-4266 | 7W-15W | 1.1GHz | 3.5GHz | 3.0GHz | 1.1GHz |
|  | i3-1110G4 | UHD | 2/4 | 48 | 6MB | LPDDR4x-4266 | 7W-15W | 1.8GHz | 3.9GHz | 3.9GHz | 1.1GHz |
Since AMD beat Intel to the punch when it came to PCIe 4.0 on desktops, Intel was keen to highlight Tiger Lake as the industry’s first PCIe 4.0 platform for laptops. Other connectivity options include WiFi 6 and Thunderbolt 4. Tiger Lake focuses on a number of improvements over Ice Lake, namely in higher sustained frequencies, improved memory support, better graphics performance, and a renewed focus on power efficiency aimed at improving battery life.
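For context on that PCIe 4.0 point, the rough per-lane math works out as follows; protocol overhead is ignored here, so real-world throughput lands somewhat lower:

```python
# PCIe 3.0 signals at 8 GT/s and PCIe 4.0 at 16 GT/s, both with 128b/130b
# encoding. This ignores protocol overhead, so treat the output as a ceiling.

def pcie_gb_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x4: {pcie_gb_s(8, 4):.1f} GB/s")   # ~3.9 GB/s
print(f"PCIe 4.0 x4: {pcie_gb_s(16, 4):.1f} GB/s")  # ~7.9 GB/s
```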
The arrival of Tiger Lake also marked some significant rebranding for Intel. Intel has essentially rebranded the U and Y-series, and trotted out new logos for its Iris and Core i-series brands. It also debuted its Evo brand, which replaces Project Athena. Lastly, Intel switched up its company logo, noting that this is only the third time it has ever overhauled the logo in such a way.
Source: https://newsroom.intel.com/news-releases/11th-gen-tiger-lake-evo/#gs.f9p5wl
24:44 | TeamGroup Launches 15.3 TB SSD at $4,000
TeamGroup has now officially taken the most expensive “consumer” SSD crown. The company announced its newest line of spacious SSDs, the QX series. The inaugural TeamGroup QX SSD will offer a capacity of 15.3 TB in a 2.5” form factor and SATA III interface.
The QX SSD will use 3D QLC NAND from an unspecified provider and a Phison E12DC controller, coupled with an SLC caching mode and a DRAM buffer. TeamGroup claims speeds of 560 MB/s (read) and 480 MB/s (write), and a write life of 2,560 TBW backed by a three-year warranty.
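To put that endurance rating in context, a quick calculation against the capacity and warranty window:

```python
# Putting the 2,560 TBW endurance rating in context against the 15.3 TB
# capacity and the three-year warranty window.

capacity_tb = 15.3
tbw = 2560
warranty_days = 3 * 365

full_drive_writes = tbw / capacity_tb  # ~167 complete drive fills
tb_per_day = tbw / warranty_days       # ~2.3 TB written per day, every day
dwpd = tb_per_day / capacity_tb        # ~0.15 drive writes per day
print(f"{full_drive_writes:.0f} fills, {tb_per_day:.2f} TB/day, {dwpd:.2f} DWPD")
```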
TeamGroup’s QX SSDs will be made to order, at an estimated price tag of $3,990.
Source: https://www.teamgroupinc.com/en/news/ins.php?index_id=140
https://www.techradar.com/news/teamgroup-launches-consumer-ssd-with-not-so-consumer-capacity
25:54 | Nvidia’s RTX 3000-Series Reddit Q&A
Nvidia recently hosted a Q&A on Reddit, where it answered some common questions about its upcoming RTX 3000-series cards. Below, we’ll detail a few of the more interesting points.
One of the bigger sticking points we’ve seen surrounds the fact that the RTX 3080 will come with a 10GB VRAM buffer, which is 1GB less than the GTX 1080 Ti’s 11GB.
“We’re constantly analyzing memory requirements of the latest games and regularly review with game developers to understand their memory needs for current and upcoming games. The goal of 3080 is to give you great performance at up to 4k resolution with all the settings maxed out at the best possible price.
In order to do this, you need a very powerful GPU with high speed memory and enough memory to meet the needs of the games. A few examples – if you look at Shadow of the Tomb Raider, Assassin’s Creed Odyssey, Metro Exodus, Wolfenstein Youngblood, Gears of War 5, Borderlands 3 and Red Dead Redemption 2 running on a 3080 at 4k with Max settings (including any applicable high res texture packs) and RTX On, when the game supports it, you get in the range of 60-100fps and use anywhere from 4GB to 6GB of memory.
Extra memory is always nice to have but it would increase the price of the graphics card, so we need to find the right balance.”
Nvidia also offered some clarification on whether Nvidia Reflex was software, hardware, or both.
“NVIDIA Reflex is both. The NVIDIA Reflex Latency Analyzer is a revolutionary new addition to the G-SYNC Processor that enables end to end system latency measurement. Additionally, NVIDIA Reflex SDK is integrated into games and enables a Low Latency mode that can be used by GeForce GTX 900 GPUs and up to reduce system latency. Each of these features can be used independently.”
With the RTX 3000-series, Nvidia and Microsoft will be bringing some improved I/O features to the PC, which are similar to what the Xbox Series X and PS5 will leverage. Nvidia answered several questions about RTX I/O, including how it will work with SSDs and Microsoft’s DirectStorage API.
“RTX IO allows reading data from SSD’s at much higher speed than traditional methods, and allows the data to be stored and read in a compressed format by the GPU, for decompression and use by the GPU. It does not allow the SSD to replace frame buffer memory, but it allows the data from the SSD to get to the GPU, and GPU memory much faster, with much less CPU overhead.”
Nvidia on support for RTX I/O and the DirectStorage API:
“RTX IO and DirectStorage will require applications to support those features by incorporating the new API’s. Microsoft is targeting a developer preview of DirectStorage for Windows for game developers next year, and NVIDIA RTX gamers will be able to take advantage of RTX IO enhanced games as soon as they become available.”
Nvidia also noted that while there isn’t a minimum SSD requirement to take advantage of RTX I/O, obviously faster SSDs will produce better results.
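Neither the RTX IO nor DirectStorage APIs are public yet, so the sketch below is purely conceptual, with invented function names; it only illustrates the difference Nvidia describes, i.e., where decompression happens and how many bytes cross the bus:

```python
import zlib

# Conceptual contrast only -- these functions are invented for illustration,
# since neither API is available. The point: in the RTX IO-style path, data
# stays compressed across PCIe and the GPU does the decompression.

ASSET = zlib.compress(b"texture data " * 10_000)  # stand-in compressed asset

def bytes_transferred_traditional(blob: bytes) -> int:
    """SSD -> RAM -> CPU decompress -> full-size copy to the GPU."""
    raw = zlib.decompress(blob)  # the CPU burns cycles here
    return len(raw)              # the decompressed size crosses PCIe

def bytes_transferred_rtx_io_style(blob: bytes) -> int:
    """SSD -> GPU memory while still compressed; the GPU decompresses."""
    return len(blob)             # only the compressed size crosses PCIe

print(f"traditional: {bytes_transferred_traditional(ASSET):,} bytes over PCIe")
print(f"RTX IO-style: {bytes_transferred_rtx_io_style(ASSET):,} bytes over PCIe")
```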
Source: https://www.reddit.com/r/nvidia/comments/ilhao8/nvidia_rtx_30series_you_asked_we_answered/
Source: Gamersnexus.net