KServe Providers Dish Up NIMble Inference in Clouds and Data Centers

NVIDIA NIM on open-source Kubernetes platforms from providers such as Canonical, Nutanix and Red Hat allows users to deploy large language models at scale with an API call.
by Adam Tetelman

Deploying generative AI in the enterprise is about to get easier than ever.

NVIDIA NIM, a set of generative AI inference microservices, works with KServe, open-source software that automates putting AI models to work at the scale of a cloud computing application.

The combination ensures generative AI can be deployed like any other large enterprise application. It also makes NIM widely available through platforms from dozens of companies, such as Canonical, Nutanix and Red Hat.

The integration of NIM on KServe extends NVIDIA’s technologies to the open-source community, ecosystem partners and customers. Through NIM, they can all access the performance, support and security of the NVIDIA AI Enterprise software platform with an API call — the push-button of modern programming.

Serving AI on Kubernetes

KServe got its start as part of Kubeflow, a machine learning toolkit based on Kubernetes, the open-source system for deploying and managing software containers that hold all the components of large distributed applications.

As Kubeflow’s work on AI inference expanded, the component that became KServe emerged and ultimately spun out as its own open-source project.

Many organizations have contributed to and adopted KServe, which today runs at companies including AWS, Bloomberg, Canonical, Cisco, Hewlett Packard Enterprise, IBM, Red Hat, Zillow and NVIDIA.

Under the Hood With KServe

KServe is essentially an extension of Kubernetes that runs AI inference like a powerful cloud application. It uses a standard protocol, runs with optimized performance and supports PyTorch, Scikit-learn, TensorFlow and XGBoost without users needing to know the details of those AI frameworks.
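To make that concrete, here’s a minimal sketch of deploying a scikit-learn model with KServe’s Python SDK, following the pattern in KServe’s own documentation. The namespace, resource name and storage URI are placeholders, not part of any real deployment.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    constants,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

# Describe the model as a Kubernetes custom resource: KServe pulls the
# model from storage and stands up an inference endpoint around it.
isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="sklearn-iris", namespace="models"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                # Placeholder model location from KServe's docs examples.
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model"
            )
        )
    ),
)

# Submit the resource to the cluster; Kubernetes handles the rest.
KServeClient().create(isvc)
```

Notice that nothing here is framework-specific beyond naming the predictor type; swapping in another supported framework changes one field, not the workflow.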

The software is especially useful these days, when new large language models (LLMs) are emerging rapidly.

KServe lets users easily go back and forth from one model to another, testing which one best suits their needs. And when an updated version of a model gets released, a KServe feature called “canary rollouts” automates the job of carefully validating and gradually deploying it into production.
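As a sketch of what that looks like, the snippet below updates the same hypothetical InferenceService to a new model version while routing only 10 percent of traffic to it; the field names follow KServe’s v1beta1 API, and the model URI is again a placeholder.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    constants,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

# Point the predictor at a new model version, but send it only 10% of
# requests; the previous revision keeps serving the other 90% until the
# canary proves itself and the percentage is raised.
canary = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="sklearn-iris", namespace="models"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            canary_traffic_percent=10,
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model-2"
            ),
        )
    ),
)

KServeClient().patch("sklearn-iris", canary, namespace="models")
```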

Another feature, GPU autoscaling, efficiently manages how models are deployed as demand for a service ebbs and flows, so customers and service providers have the best possible experience.
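Autoscaling is configured declaratively too. Here’s an illustrative predictor spec that bounds a GPU-backed model between one and four replicas and scales on request concurrency; the replica counts, scaling target and GPU limit are example values, not recommendations.

```python
from kubernetes import client
from kserve import V1beta1PredictorSpec, V1beta1SKLearnSpec

# Scale between 1 and 4 replicas, adding one whenever a replica is
# handling more than ~2 requests at once; each replica requests a GPU.
predictor = V1beta1PredictorSpec(
    min_replicas=1,
    max_replicas=4,
    scale_metric="concurrency",
    scale_target=2,
    sklearn=V1beta1SKLearnSpec(
        storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}
        ),
    ),
)
```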

An API Call to Generative AI

The goodness of KServe is now available with the ease of NVIDIA NIM.

With NIM, a simple API call takes care of all the complexities. Enterprise IT admins get the metrics they need to ensure their application is running with optimal performance and efficiency, whether it’s in their data center or on a remote cloud service — even if they change the AI models they’re using.
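For a sense of how simple that call is, here’s a hypothetical sketch of querying a NIM microservice deployed behind KServe. The URL is a placeholder for whatever address the InferenceService exposes, and NIM’s OpenAI-compatible chat interface is assumed.

```python
import requests

# Placeholder URL; substitute the address your InferenceService reports.
URL = "http://llama3-nim.models.example.com/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize KServe in one sentence."}
    ],
    "max_tokens": 256,
}

# One POST request; the microservice handles batching, scheduling and
# GPU optimization behind the scenes.
response = requests.post(URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```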

NIM lets IT professionals become generative AI pros, transforming their company’s operations. That’s why a host of enterprises such as Foxconn and ServiceNow are deploying NIM microservices.

NIM Rides Dozens of Kubernetes Platforms

Thanks to its integration with KServe, users will be able to access NIM on dozens of enterprise platforms such as Canonical’s Charmed Kubeflow and Charmed Kubernetes, Nutanix GPT-in-a-Box 2.0, Red Hat’s OpenShift AI and many others.

“Red Hat has been working with NVIDIA to make it easier than ever for enterprises to deploy AI using open source technologies,” said KServe contributor Yuan Tang, a principal software engineer at Red Hat. “By enhancing KServe and adding support for NIM in Red Hat OpenShift AI, we’re able to provide streamlined access to NVIDIA’s generative AI platform for Red Hat customers.”

“Through the integration of NVIDIA NIM inference microservices with Nutanix GPT-in-a-Box 2.0, customers will be able to build scalable, secure, high-performance generative AI applications in a consistent way, from the cloud to the edge,” said Debojyoti Dutta, vice president of engineering at Nutanix, whose team contributes to KServe and Kubeflow.

“As a company that also contributes significantly to KServe, we’re pleased to offer NIM through Charmed Kubernetes and Charmed Kubeflow,” said Andreea Munteanu, MLOps product manager at Canonical. “Users will be able to access the full power of generative AI, with the highest performance, efficiency and ease thanks to the combination of our efforts.”

Dozens of other software providers benefit from NIM simply because they include KServe in their offerings.

Serving the Open-Source Community

NVIDIA has a long track record on the KServe project. As noted in a recent technical blog, KServe’s Open Inference Protocol is used in NVIDIA Triton Inference Server, which helps users run many AI models simultaneously across many GPUs, frameworks and operating modes.
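For readers unfamiliar with it, the Open Inference Protocol (also known as the v2 protocol) defines a common request shape that both KServe and Triton speak. A minimal request, with a placeholder host and example tensor values, looks like this:

```python
import requests

# Placeholder host; any v2-protocol server accepts this request shape
# at /v2/models/<model-name>/infer.
URL = "http://sklearn-iris.models.example.com/v2/models/sklearn-iris/infer"

request_body = {
    "inputs": [
        {
            "name": "input-0",        # tensor name
            "shape": [1, 4],          # batch of one, four features
            "datatype": "FP32",
            "data": [[6.8, 2.8, 4.8, 1.4]],
        }
    ]
}

response = requests.post(URL, json=request_body)
print(response.json()["outputs"])
```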

With KServe, NVIDIA focuses on use cases that involve running one AI model at a time across many GPUs.

As part of the NIM integration, NVIDIA plans to be an active contributor to KServe, building on its portfolio of contributions to open-source software that includes Triton and TensorRT-LLM. NVIDIA is also an active member of the Cloud Native Computing Foundation, which supports open-source code for generative AI and other projects.

Try the NIM API on the NVIDIA API Catalog using the Llama 3 8B or Llama 3 70B models today. Hundreds of NVIDIA partners worldwide are using NIM to deploy generative AI.
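Because the catalog exposes an OpenAI-compatible endpoint, a first experiment can be as short as the sketch below. The endpoint and model ID follow the catalog’s documentation at the time of writing and may change, so check build.nvidia.com for current values.

```python
from openai import OpenAI

# Endpoint and model ID as documented on build.nvidia.com; both are
# subject to change.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",  # your API Catalog key
)

completion = client.chat.completions.create(
    model="meta/llama3-8b-instruct",
    messages=[{"role": "user", "content": "What is NVIDIA NIM?"}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```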

Watch NVIDIA founder and CEO Jensen Huang’s COMPUTEX keynote to get the latest on AI and more.