NVIDIA has introduced Magnum IO, a software suite that processes huge amounts of data in minutes rather than hours. The suite was developed in collaboration with IBM and Mellanox, among other companies.
Optimized to eliminate storage and input/output bottlenecks, Magnum IO delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes working with massive datasets in complex financial analysis, climate modeling and other HPC (high-performance computing) workloads.
“Processing large amounts of collected or simulated data is at the heart of data-driven sciences like AI,” said Jensen Huang, founder and CEO of NVIDIA, in a statement. “As the scale and velocity of data grow exponentially, processing it has become one of data centers’ great challenges and costs.”
At the same time, the company announced the availability of a new GPU accelerator offering through Microsoft Azure. The new supercomputer can handle the most demanding artificial intelligence and computing applications and ranks among the world’s fastest. For the first time, customers will be able to rent supercomputing capability on demand right from their desktops and tap the power of machines that previously required dedicated facilities.
Huang added: “Breakthroughs in machine learning and artificial intelligence are redefining scientific methods and enabling exciting opportunities across all industries. The combination of NVIDIA’s GPUs with Arm processors, Microsoft Azure cloud, and other industry partners eliminates the long data-processing waits and the huge costs that computing has involved until now, making it accessible to all.”