
This could be the reason you upgrade your GPU

The RTX 4080 in a running test bench.
Jacob Roach / Digital Trends

Now more than ever, the best graphics cards aren’t defined by their raw performance alone — they’re defined by their features. Nvidia has set the stage with DLSS, which now encompasses upscaling, frame generation, and a ray tracing denoiser, and AMD is hot on Nvidia’s heels with FSR 3. But what will define the next generation of graphics cards?

It’s no secret that features like DLSS 3 and FSR 3 are a key factor when buying a graphics card in 2024, and I suspect AMD and Nvidia are well aware of that trend. We already have a taste of what could come in the next generation of GPUs from Nvidia, AMD, and even Intel, and it could make a big difference in PC gaming. It’s called neural texture compression.

Let’s start with texture compression

An opal material in Unreal Engine 5.
Epic Games

Before we can get to neural texture compression, we have to talk about what texture compression is in the first place. Like any data compression, it reduces the size of textures, but it has a few unique requirements compared to, for example, an image compression technique like JPEG. Texture compression trades visual quality for decoding speed, while static compression techniques like JPEG typically optimize for quality over speed.


This is important because game textures stay compressed through almost the entire pipeline: compressed in storage, compressed in system memory and VRAM, and only decompressed at the moment they’re actually sampled for rendering. Texture compression also needs to be optimized for random access, since rendering taps different parts of memory depending on the textures it needs at the time.
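To make the random-access requirement concrete, here’s a minimal sketch in Python (the function and parameter names are mine, not from any real API). GPU texture formats compress fixed-size tiles of texels into a fixed number of bytes, so the tile holding any given texel sits at an offset the hardware can compute directly, without decompressing anything that comes before it:

```python
def compressed_block_offset(x, y, width_px, block_dim=4, block_bytes=8):
    """Byte offset of the fixed-size compressed block holding texel (x, y).

    block_bytes is 8 for formats like BC1 and 16 for formats like BC7.
    """
    blocks_per_row = (width_px + block_dim - 1) // block_dim  # round up
    block_index = (y // block_dim) * blocks_per_row + (x // block_dim)
    return block_index * block_bytes  # every block is the same size
```

A format like JPEG has no equivalent: its entropy-coded stream largely has to be decoded in order, which is one reason it never serves as a live GPU texture format.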

That’s done with block compression today, which takes each 4×4 block of pixels and encodes it into a small, fixed number of bytes, hence the “block” name. Block compression has been around for decades. There are different formats, as well as techniques like Adaptive Scalable Texture Compression (ASTC) for mobile devices, but the core concept has stayed the same.
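As a concrete example, here’s an illustrative sketch (in Python, not production code) of decoding a single block of BC1, also known as DXT1, one of the oldest and simplest block-compression formats. Each 4×4 block of pixels is stored in just 8 bytes: two 16-bit endpoint colors plus a 2-bit palette index per pixel, an 8:1 saving over raw 32-bit RGBA:

```python
import struct

def rgb565_to_rgb888(c):
    """Expand a packed 16-bit RGB565 color to 8 bits per channel."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_bc1_block(block):
    """Decode one 8-byte BC1 block into a 4x4 grid of RGB pixels."""
    c0, c1, indices = struct.unpack("<HHI", block)  # 2 endpoints + 32 index bits
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:  # four-color mode: two extra colors interpolated between endpoints
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:        # three-color mode: a midpoint color plus transparent black
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    # Each of the 16 pixels picks a palette entry with a 2-bit index.
    return [[palette[(indices >> (2 * (4 * y + x))) & 0b11] for x in range(4)]
            for y in range(4)]

# A solid red block: both endpoints pure red, every index pointing at entry 0.
print(decode_bc1_block(struct.pack("<HHI", 0xF800, 0xF800, 0))[0][0])  # (255, 0, 0)
```

Note what the format gives up: 16 pixels share a palette of just four colors, which is exactly the quality-for-speed trade described above.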

A weapon texture in Redfall.
Digital Trends

Here’s the issue: textures aren’t getting any smaller. Highly detailed game worlds call for highly detailed textures, putting more strain on your hardware to decode those textures, as well as on your memory and VRAM. We’ve seen higher memory requirements in games like Returnal and Hogwarts Legacy, and we’ve seen 8GB graphics cards struggle to keep up in games like Halo Infinite and Redfall. There’s also supercompression with tools like Oodle Texture (not to be confused with general data compression via tools like Oodle Kraken), which squeezes the already compressed textures further for smaller download sizes. That extra layer needs to be decompressed by the CPU, putting still more strain on your hardware.
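Some rough numbers show why VRAM fills up so quickly. This back-of-the-envelope helper is hypothetical, though the rule it uses (a full mip chain adds about a third on top of the base level) is a standard approximation:

```python
def texture_vram_mib(width, height, bytes_per_texel, with_mips=True):
    """Approximate VRAM footprint of one texture, in MiB.

    A full mipmap chain adds roughly one third on top of the base level.
    """
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if with_mips else base
    return total / (1024 * 1024)

print(texture_vram_mib(4096, 4096, 4))    # raw RGBA8 4K texture: ~85 MiB
print(texture_vram_mib(4096, 4096, 0.5))  # the same texture in BC1: ~10.7 MiB
```

Even block-compressed, a few hundred unique 4K textures add up to multiple gigabytes, and VRAM also has to hold geometry, render targets, and everything else a frame needs.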

The solution seems to be to throw AI at the problem, which is something Nvidia and AMD are both exploring right now, and it just might be the reason you buy a new graphics card.

The neural difference

Nvidia's research for neural texture compression.
Nvidia

In August last year, Nvidia introduced Neural Texture Compression (NTC) at Siggraph. The technique is able to store 16 times as many texels as typical block compression, which works out to a texture with four times the resolution along each axis. That’s not impressive on its own, but this part is: “Our method allows for on-demand, real-time decompression with random access similar to block texture compression on GPUs.”

NTC uses a small neural network to decompress these textures directly on the GPU, and in a time window that’s competitive with block compression. As the abstract says, “this extends our compression benefits all the way from disk storage to memory.”
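To give a flavor of the idea, here’s a toy sketch in Python and NumPy. This is not Nvidia’s implementation (the paper uses learned multi-resolution feature grids and heavily optimized, quantized networks), and every shape and name below is invented for illustration. What it shows is the overall structure: the “compressed texture” is a grid of learned latent features, and a tiny fixed-cost network turns the features at any texel back into color, which is where the random-access property comes from:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "compressed texture": a grid of latent features in place of raw colors.
# In a real system these would come out of training; here they're random
# placeholders, and all shapes are illustrative.
LATENT_CHANNELS = 8
latent_grid = rng.standard_normal((256, 256, LATENT_CHANNELS)).astype(np.float32)

# A tiny decoder MLP. Its small, fixed cost is what makes per-texel
# decompression feasible while rendering.
W1 = rng.standard_normal((LATENT_CHANNELS, 16)).astype(np.float32)
W2 = rng.standard_normal((16, 3)).astype(np.float32)

def decode_texel(u, v):
    """Fetch the latent features at (u, v) and decode them to a color.

    One feature fetch and two small matrix multiplies, no matter which
    texel is requested: that is the random access the paper highlights.
    """
    h, w, _ = latent_grid.shape
    feat = latent_grid[int(v * (h - 1)), int(u * (w - 1))]
    hidden = np.maximum(feat @ W1, 0.0)  # ReLU activation
    return hidden @ W2                   # unnormalized RGB in this toy

print(decode_texel(0.5, 0.5))
```

The quality comes from training the features and the decoder together per texture set; the speed comes from keeping the decoder small enough to run for every texel the renderer touches.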

Nvidia isn’t the only one. AMD just revealed that it will discuss neural block texture compression at this year’s Siggraph in a research paper of its own. Intel has addressed the problem, too, specifically calling out VRAM limitations when it introduced an AI-driven level-of-detail (LoD) technique for 3D objects.

Although these are just research papers, they’re all getting at neural rendering. Given how AI is sweeping the world of computing, it’s hardly surprising that AMD, Nvidia, and Intel are all looking for the next frontier in neural rendering. If you need more convincing, here’s what Nvidia CEO Jensen Huang had to say on the matter in a recent Q&A: “AI for gaming — we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames — not interpolation, but generation. In the future we’ll even generate textures and objects, and the objects can be of lower quality and we can make them look better.”

A rising tide

The Gigabyte GeForce RTX 4070 Ti Super AI Top graphics card showcased at Computex 2024.
Kunal Khullar / Digital Trends

At the moment, it’s impossible to say what form neural texture compression will take. It could be relegated to middleware, stuffed into a logo as you start up your game, and never given a second thought. It might never manifest as a feature that shows up in games, especially if there’s a better use for it elsewhere. Or it could be one of the key features that stands out in the next generation of graphics cards.

I’m not saying it will be, but clearly AMD, Nvidia, and Intel all recognize something here. There’s a balance to strike between install size, memory demands, and the final quality of textures in a game, and neural texture compression looks like a key that could give developers more room to play with. Maybe that leads to more detailed worlds, or maybe to a slight bump in detail with much less demand on memory. That’s up to developers to balance.

There’s a clear benefit, but the requirements remain a mystery. So far, AMD hasn’t presented its research, and Nvidia’s research is based on the performance of an RTX 4090. In an ideal world, neural texture compression — or more accurately, neural decompression — would be a developer-facing feature that works on a wide range of hardware. If it’s as significant as some of these research papers suggest, though, it might be the next frontier for PC gaming.

I suspect this isn’t the last we’ve heard of it, at least. We’re standing on the edge of a new generation of graphics cards, from Nvidia’s RTX 50-series to AMD’s RX 8000 GPUs to Intel’s Battlemage. As we start to learn about these GPUs, I have a hard time imagining neural texture compression won’t be part of the conversation.
