KUALA LUMPUR, Sept 29 (Bernama) -- Cloudflare Inc, a connectivity cloud company, announced that it will deploy NVIDIA GPUs, combined with NVIDIA Ethernet switches, at the edge of its global network, putting artificial intelligence (AI) inference compute power close to users worldwide.
It will also feature NVIDIA’s full-stack inference software, including NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server, to further accelerate the performance of AI applications, including large language models.
“With NVIDIA’s state-of-the-art GPU technology on our global network, we are making AI inference, previously out of reach for many customers, accessible and affordable globally,” said Cloudflare chief executive officer and co-founder Matthew Prince in a statement.
Meanwhile, NVIDIA vice president of hyperscale and high-performance computing (HPC) Ian Buck said: “With NVIDIA GPUs and NVIDIA AI software available on Cloudflare, businesses will be able to create responsive new customer experiences and drive innovation across every industry.”
All Cloudflare customers can now access local compute power to deliver AI applications and services on fast, compliant infrastructure.
With this announcement, organisations will be able to run AI workloads at scale through Cloudflare for the first time, paying for compute power only as needed.
By deploying NVIDIA GPUs to its global edge network, Cloudflare now provides low-latency generative AI experiences for every end user; access to compute power near wherever customer data resides; and affordable, pay-as-you-go compute power at scale.
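For developers, this capability surfaces through serverless functions running on Cloudflare’s network. The following is a minimal sketch assuming Cloudflare’s Workers AI binding, the companion product announced the same week; the binding name AI, the model identifier, and the prompt are illustrative rather than details drawn from this article.

```typescript
// Minimal Cloudflare Worker calling a hosted model through the Workers AI
// binding. "AI" is the binding name configured in wrangler.toml; the model
// identifier below is illustrative of the launch-era catalogue.

interface Env {
  AI: {
    run(model: string, input: Record<string, unknown>): Promise<unknown>;
  };
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // The request is served from a nearby Cloudflare location with GPU
    // capacity, so inference runs close to the end user.
    const result = await env.AI.run('@cf/meta/llama-2-7b-chat-int8', {
      prompt: 'Explain edge inference in one sentence.',
    });
    return Response.json(result);
  },
};
```

In this model the customer writes no GPU-management code: requests are routed to a nearby point of presence and compute is billed per use, in keeping with the pay-as-you-go framing above.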
Cloudflare’s connectivity cloud delivers the most full-featured, unified platform of cloud-native products and developer tools, so any organisation can gain the control they need to work, develop, and accelerate their business.
-- BERNAMA