“Our customers are racing toward a future where every product and service will be touched and improved by AI,” said Ian Buck, VP and GM of Accelerated Computing at Nvidia. “The Nvidia TensorRT Hyperscale Platform has been built to bring this to reality — faster and more efficiently than had been previously thought possible.”
Jordi Ribas, corporate VP for Bing and AI Products at Microsoft, added: “Using Nvidia GPUs in real-time inference workloads has improved Bing’s advanced search offerings, enabling us to reduce object detection latency for images. We look forward to working with Nvidia’s next-generation inference hardware and software to expand the way people benefit from AI products and services.”
Chris Kleban, product manager at Google Cloud, also said that the company was “excited to support Nvidia’s Turing Tesla T4 GPUs on Google Cloud Platform soon.”
Server manufacturers including Cisco, Dell EMC, Fujitsu, HPE, IBM, Oracle and Supermicro plan to release servers featuring the T4 GPU.
Elsewhere at GTC
The Tokyo conference was also the setting for several other Nvidia announcements, many of them related to its autonomous vehicle initiatives. Among them was the news that NTT Group plans to use Nvidia’s AI platform, based on Tensor Core GPUs, as the common platform for its company-wide “corevo” AI initiative, and that Fujifilm will use a DGX-2 system for AI research.