
NVIDIA makes significant progress on AI at the Edge and Inference

PRESS RELEASE
Published Nov 7, 2019, 3:02 pm IST
Updated Nov 7, 2019, 3:02 pm IST

NVIDIA today posted the fastest results on new benchmarks measuring the performance of AI inference workloads in data centers and at the edge — building on the company’s equally strong position in recent benchmarks measuring AI training.

The results of the industry’s first independent suite of AI benchmarks for inference, called MLPerf Inference 0.5, demonstrate the performance of NVIDIA Turing™ GPUs for data centers and the NVIDIA Xavier™ system-on-a-chip for edge computing.

MLPerf’s five inference benchmarks — applied across a range of form factors and four inference scenarios — cover such established AI applications as image classification, object detection and translation.

NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries. Xavier provided the highest performance among commercially available edge and mobile SoCs under both edge-focused scenarios (single-stream and multi-stream).
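The scenarios differ mainly in how queries arrive and what is measured: single-stream sends one query at a time and reports per-query latency, while offline presents all queries up front and reports bulk throughput. The rough sketch below illustrates that distinction; it is not the official MLPerf LoadGen harness, and the dummy model and timings are placeholders.

```python
import time

def dummy_model(batch):
    # Placeholder "inference": simulate a fixed per-batch cost,
    # standing in for a real model invocation.
    time.sleep(0.001)
    return [x * 2 for x in batch]

queries = list(range(1000))

# Single-stream: one query at a time; the metric is per-query latency.
start = time.perf_counter()
for q in queries:
    dummy_model([q])
single_stream_latency = (time.perf_counter() - start) / len(queries)

# Offline: all queries are available immediately; the metric is throughput.
start = time.perf_counter()
dummy_model(queries)  # one large batch
offline_throughput = len(queries) / (time.perf_counter() - start)

print(f"single-stream latency: {single_stream_latency * 1000:.2f} ms/query")
print(f"offline throughput: {offline_throughput:.0f} queries/s")
```

Because the offline scenario can batch work, its throughput far exceeds the reciprocal of single-stream latency, which is why the two scenarios reward different hardware trade-offs.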

Highlighting the programmability and performance of its computing platform across diverse AI workloads, NVIDIA was the only AI platform company to submit results across all five MLPerf benchmarks. In July, NVIDIA set eight records in the MLPerf 0.6 benchmarks for AI training.

NVIDIA GPUs accelerate large-scale inference workloads in the world’s largest cloud infrastructures, including Alibaba Cloud, AWS, Google Cloud Platform, Microsoft Azure and Tencent.

AI is now moving to the edge, the point of action and data creation. World-leading businesses and organizations, including Walmart and Procter & Gamble, are using NVIDIA’s EGX edge computing platform and AI inference capabilities to run sophisticated AI workloads at the edge.

All of NVIDIA’s MLPerf results were achieved using NVIDIA TensorRT™ 6, high-performance deep learning inference software that makes it easy to optimize and deploy AI applications in production, from the data center to the edge. New TensorRT optimizations are also available as open source in the GitHub repository.

New Jetson Xavier NX

Expanding its inference platform, NVIDIA today introduced Jetson Xavier NX, the world’s smallest, most powerful AI supercomputer for robotic and embedded computing devices at the edge. Jetson Xavier NX is built around a low-power version of the Xavier SoC used in the MLPerf Inference 0.5 benchmarks.

Click on Deccan Chronicle Technology and Science for the latest news and reviews. Follow us on Facebook, Twitter
