Video: Talk to Your Supply Chain Data Using NVIDIA NIM

NVIDIA operates one of the largest and most complex supply chains in the world. The supercomputers we build connect tens of thousands of NVIDIA GPUs with hundreds of miles of high-speed optical cables. We rely on hundreds of partners to deliver thousands of different components to a dozen factories to build nearly three thousand products. A single disruption to our supply chain can impact our ability to meet our commitments.

This four-minute video highlights how organizations can overcome operational complexity and deliver AI factories at extraordinary scale with an AI planner built from LLM NIMs, NVIDIA NeMo Retriever NIMs, and a cuOpt NIM.

Key Takeaways

  • The AI planner is an LLM-powered agent built on NVIDIA NIM, a set of accelerated inference microservices. A NIM is a container that packages a pretrained model with CUDA acceleration libraries and is easy to download, deploy, and operate on premises or in the cloud.
  • This planning example is built using:
    • An LLM NIM to understand planners’ intentions and direct the other models
    • A NeMo Retriever RAG NIM to connect the LLM to proprietary data
    • A cuOpt NIM to optimize logistics
  • For the optimization AI, we used NVIDIA cuOpt, a state-of-the-art, GPU-accelerated combinatorial optimization library that holds 23 world-record benchmarks.
  • With cuOpt as the optimization brain behind our agent, our operations team can analyze thousands of possible scenarios in real time using natural language inputs to talk to our supply chain data.
  • Our cuOpt-powered optimization can do this analysis in just seconds, enabling the speed and agility to respond to an ever-shifting supply chain.
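The flow described in the takeaways can be sketched in a few lines: the LLM interprets the planner's question and routes it either to retrieval over proprietary data or to the optimizer. This is an illustrative sketch only; `classify_intent`, `retrieve`, and `optimize` are hypothetical stand-ins for calls to the LLM NIM, NeMo Retriever NIM, and cuOpt NIM, not actual NVIDIA APIs.

```python
# Hypothetical sketch of the three-NIM planner pipeline described above.
# In the real system, intent classification is done by the LLM NIM itself;
# here a keyword heuristic stands in so the routing logic is visible.

def classify_intent(question: str) -> str:
    """Stand-in for the LLM NIM: decide which capability the question needs."""
    optimization_terms = ("route", "schedule", "reallocate", "optimize")
    if any(term in question.lower() for term in optimization_terms):
        return "optimize"
    return "retrieve"

def answer(question: str, retrieve, optimize) -> str:
    """Route the planner's natural-language question to the right backend.

    `retrieve` stands in for the NeMo Retriever NIM (grounding answers in
    proprietary supply chain data); `optimize` stands in for the cuOpt NIM
    (solving the underlying logistics problem).
    """
    if classify_intent(question) == "optimize":
        return optimize(question)
    return retrieve(question)
```

In the actual agent, the LLM also rewrites the planner's question into a structured problem the optimizer can consume, rather than passing raw text through.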

Summary

Try NVIDIA cuOpt, NeMo Retriever, and LLM NIM microservices for free on the NVIDIA API catalog.
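Once a NIM is running, it serves an OpenAI-compatible chat-completions API over HTTP. The minimal sketch below assumes a locally deployed LLM NIM listening on port 8000; the URL and model name are placeholders to replace with the values for your own deployment.

```python
import json
import urllib.request

# Placeholder endpoint for a locally deployed LLM NIM; substitute your own
# deployment URL (and add an authorization header if your endpoint needs one).
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_payload(prompt, model="meta/llama3-8b-instruct", max_tokens=256):
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_nim(prompt, model="meta/llama3-8b-instruct"):
    """POST a prompt to the NIM endpoint and return the reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_chat_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the API surface matches the OpenAI chat-completions format, the same request shape works against a NIM hosted in the cloud or on premises.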

Check out these resources to learn more about cuOpt.
