Sunday, April 20th 2025

AMD Readies Radeon PRO W9000 Series Powered by RDNA 4
AMD is readying a new line of professional graphics cards based on its latest RDNA 4 graphics architecture. The company has assigned the silicon variant "Navi 48 XTW" to power its next flagship pro-vis product, which will likely be branded under the Radeon PRO W9000 series. According to the source of this leak, the card comes with 32 GB of memory, probably ECC GDDR6, across the chip's 256-bit wide memory bus. The product should offer the same core configuration as the Radeon RX 9070 XT gaming GPU: 64 compute units amounting to 4,096 stream processors, 128 AI accelerators, 64 RT accelerators, 256 TMUs, and 128 ROPs.
Besides professional visualization, AMD could target the AI acceleration crowd. The company is hosting its "Advancing AI" press event in June, where it is widely expected to announce its next-generation AI GPUs and updates to ROCm. It could also use the occasion to unveil the Radeon PRO W9000 series products and pitch them to that same audience.
Sources:
VideoCardz, Hoang Anh Phu (Twitter)
45 Comments on AMD Readies Radeon PRO W9000 Series Powered by RDNA 4
Also, here's the 32GB Navi 48 everyone wanted :laugh:
What's expected pricing? $2,999? W7800 was $2,499, according to the TPU GPU...
I don't get what's so hard about this. Programmable GPUs kind of kicked off with the Nvidia 6800 Ultra, and that was the end of GPUs being built purely for gaming. CUDA kicked off with the 8800 GTX, and gaming has been a distant second ever since. Gaming doesn't matter; PC gaming especially is the pity fuck at best. The only gaming things happening now are moves to the cloud and slapping RGB on everything. Outside of those two things, nothing is for gamers.
Too bad it will cost double or more!
If all you need is creating some images or running widely-used LLMs, then it works after a bit of fussing around.
But it definitely doesn't work well for the kind of professional usage a GPU like the one in this post is meant for. You can get the runtime on Windows itself and run some stuff on top of it.
Anything more intricate is going to be awful, and Linux is the preferred environment for that anyway.
Nonetheless, a lot of that stuff is slowly fading away from Windows as well, even when it comes to CUDA. TensorFlow no longer supports GPUs on Windows, and Nvidia recommends people use WSL whenever possible.
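For what it's worth, the quickest sanity check on a ROCm build of PyTorch (under Linux or WSL alike) is just a couple of lines; nothing here is specific to this card, it's generic PyTorch:

```python
# Minimal sanity check on a ROCm build of PyTorch (Linux or WSL).
# ROCm builds reuse the torch.cuda namespace, so the usual CUDA calls apply.
import torch

print("HIP/ROCm version:", torch.version.hip)      # None on a CUDA-only build
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```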
Still using my amazing Radeon Pro VII
I also have a Radeon Pro W5000 and W6000; best to wait 2 years for a good price drop haha.
I always wonder why workstation GPUs are always priced so much higher. Can anyone tell me, please? Because it doesn't make any sense to me.
igor's right, though. I don't think Windows has much of a future for this sort of workload, over the long term. It's just too inflexible, and the use case for these workloads is niche enough that maintaining support for Windows feels like a hard sell. Presumably Microsoft agrees, given their WSL push over recent years.
In Linux, I can do goofy shit like pairing an AMD GPU with an Nvidia GPU and using both to share a single AI inference task through Vulkan. It ain't especially fast, but it also isn't painfully slow, and it's hilarious that I can do it at all. Hypothetically I could toss an Arc GPU in there and run inference on three different GPUs, from three different companies, using three different drivers. What's even funnier is that this clownish scheme is actually easier to set up than ROCm, in many cases.
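If anyone wants to try that mixed-vendor setup, here's a rough sketch of how it looks through llama-cpp-python built against the Vulkan backend; the model path, split ratio, and build flag below are placeholders/assumptions rather than anything vendor-specific:

```python
# Sketch: splitting one model across two GPUs via llama.cpp's Vulkan backend.
# Assumes llama-cpp-python was built with Vulkan enabled, e.g.:
#   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-model-q4_k_m.gguf",  # hypothetical GGUF file
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # rough split between the two Vulkan devices
)

out = llm("Explain what a compute unit is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```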
It'll be interesting to see how AMD's software ecosystem evolves after UDNA. Will they jettison older architectures? I hope not, but it wouldn't shock me.
And what makes you think that this will be "easy to get"?

Like I said, it's largely due to driver validation. Since they're aimed at businesses, the large margins also help fund R&D for future architectures.

Haha, wow, that's insane. I kinda want to see it now.
You make the comparison that card X will be better for a lot of tasks; my point is that it doesn't matter, because card X is unavailable and this will therefore sell like hotcakes... which you support by insinuating that this will be hard to get as well.
There are, however, apps on Windows that support ROCm, like LM Studio. I asked for this too; I'd also like to see them add LM Studio and pick a standard set of LLMs:
8B to 14B models for lower vram cards so 8GB to 16GB
27B+ for cards with 24GB of vram and above.
Then have a generic query.
I was compiling something like this from info I found on the internet a while ago, but haven't gone back to it yet (rough sketch of the mapping below).
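Something along these lines, as a throwaway Python sketch; the cutoffs are my own rough assumptions for 4-bit-ish quants, nothing official:

```python
# Rough VRAM -> suggested model-size buckets (assumes ~4-bit quantization).
def suggest_model_size(vram_gb: float) -> str:
    if vram_gb < 8:
        return "7B or smaller"
    if vram_gb < 16:
        return "8B-14B"
    if vram_gb < 24:
        return "14B-27B"   # middle bucket is my own guess
    return "27B+"

for vram in (8, 12, 16, 24, 32):
    print(f"{vram} GB -> {suggest_model_size(vram)}")
```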
At $2,999 there is no justification to get a Navi 48 card, even buffed with 32 GB, when the RTX 5090 exists in the $1,999-$2,999 range, unless you're bound by a support contract; that was my point.
If this card turns out to be $1,499, I see many use cases where its performance figures are satisfactory; it would even be a passable AMD alternative for high-end gaming at the 32 GB level. For its intended business use, if it exclusively targets that niche, a $1,999 price would be acceptable when ECC memory is absolutely required; otherwise, the 5090 has better creator chops than the Navi 48 chip itself, regardless of which driver stack is supporting it.