Nvidia wants to be a cloud powerhouse. While its history may be in graphics cards for gaming enthusiasts, its recent focus has been on data center GPUs for AI, machine learning inference and visualization. Today, at its GTC conference, the company announced its latest RTX server configuration for Hollywood studios and others who need to quickly generate visual content.
A full RTX server pod can support up to 1,280 Turing GPUs on 32 RTX blade servers. That’s 40 GPUs per server, with each server taking up an 8U space. The GPUs here are Quadro RTX 4000 or 6000 GPUs, depending on the configuration.
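For a sense of scale, here is a quick back-of-the-envelope sketch of that pod math. The server and GPU counts come straight from Nvidia's figures above; the total rack-space number is my own arithmetic, not something Nvidia quotes.

```python
# Pod math from Nvidia's stated figures; rack-space total is my own
# arithmetic and not a number from the announcement.
SERVERS_PER_POD = 32       # RTX blade servers per pod
GPUS_PER_SERVER = 40       # Quadro RTX 4000 or 6000 GPUs per server
RACK_UNITS_PER_SERVER = 8  # each blade server occupies 8U

total_gpus = SERVERS_PER_POD * GPUS_PER_SERVER              # 1,280 GPUs
total_rack_units = SERVERS_PER_POD * RACK_UNITS_PER_SERVER  # 256U of rack space

print(f"GPUs per pod: {total_gpus}")
print(f"Rack space:   {total_rack_units}U across {SERVERS_PER_POD} servers")
```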
“NVIDIA RTX Servers — which include fully optimized software stacks available for OptiX RTX rendering, gaming, VR and AR, and professional visualization applications — can now deliver cinematic-quality graphics enhanced by ray tracing for far less than just the cost of electricity for a CPU-based rendering cluster with the same performance,” the company notes in today’s announcement.
All of this power can be shared by multiple users, and the back-end storage and networking interconnect is powered by technology from Mellanox, which Nvidia bought earlier this month for $6.9 billion. That acquisition and today’s news clearly show how important the data center has become for Nvidia’s future.
System makers like Dell, HP, Lenovo, Asus and Supermicro will offer RTX servers to their customers. All of the configurations have been validated by Nvidia and support the company’s software tools for managing the workloads that run on them.
Nvidia also stresses that these servers would work great for running AR and VR applications at the edge and then serving the visuals to clients over 5G networks. That’s a few too many buzzwords for my taste; consumer interest in AR and VR remains questionable, and 5G networks are still far from mainstream. Still, there’s a role for these servers in powering cloud gaming services, for example.