AI data cloud company Snowflake revealed a new partnership with Nvidia at Snowflake Summit 2024. This partnership will allow customers and partners to develop customized AI data applications in Snowflake, powered by Nvidia AI. 

(Photo caption: The logo of Nvidia is seen during the Hon Hai Tech Day in Taipei on October 18, 2023. Photo: I-HWA CHENG/AFP via Getty Images)

Snowflake Collaborates With Nvidia 

In this collaboration, Snowflake has integrated Nvidia AI Enterprise software, incorporating NeMo Retriever microservices into Snowflake Cortex AI. The integration lets organizations connect custom models to their own business data so that applications can return more accurate responses.

Additionally, Snowflake Arctic, the company's enterprise-grade large language model (LLM), is now fully supported with Nvidia TensorRT-LLM software for optimized inference performance. Arctic is also available as an Nvidia NIM inference microservice, putting it within reach of more developers.
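To give a sense of what this looks like in practice, below is a minimal, hypothetical sketch of querying Arctic through Cortex AI from Python using the Snowpark client and Cortex's SQL COMPLETE function. The connection values are placeholders, and the exact model name and available options should be verified against Snowflake's documentation.

```python
# Hypothetical sketch: querying the Arctic LLM through Snowflake Cortex AI
# using the Snowpark Python client. All connection values are placeholders.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<your_account>",      # placeholder
    "user": "<your_user>",            # placeholder
    "password": "<your_password>",    # placeholder
    "warehouse": "<your_warehouse>",  # placeholder
}

session = Session.builder.configs(connection_parameters).create()

# Cortex exposes LLMs through SQL functions; SNOWFLAKE.CORTEX.COMPLETE runs a
# completion against a named model such as 'snowflake-arctic'.
result = session.sql(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', "
    "'Summarize last quarter''s sales trends in two sentences.') AS response"
).collect()

print(result[0][0])
session.close()
```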

Snowflake CEO Sridhar Ramaswamy described the combination of Nvidia's accelerated computing and software with Snowflake's advanced AI capabilities as game-changing for customers.

The collaboration aims to usher in a new era of AI, enabling customers of all skill levels and industries to develop custom AI applications effortlessly and confidently.

Jensen Huang, founder and CEO of Nvidia, highlighted the significance of data in the AI revolution, expressing confidence that the partnership with Snowflake will enable enterprises to refine their proprietary data and unlock its potential through generative AI.

Integrating Nvidia AI Into Cortex AI

Snowflake and Nvidia are working together to integrate Nvidia AI Enterprise software capabilities into Cortex AI, empowering business users to build and utilize bespoke AI-powered applications effectively. 

This integration includes NeMo Retriever, which facilitates high-accuracy information retrieval for enterprises within Cortex AI, and Nvidia Triton Inference Server, enabling seamless deployment and scaling of AI inference.

Moreover, Nvidia NIM inference microservices, part of Nvidia AI Enterprise, can now be launched directly in Snowflake as a Snowflake Native App powered by Snowpark Container Services, simplifying the deployment of foundation models within organizations' Snowflake environments.

Quantiphi, an AI-centric digital engineering firm and a partner of both Snowflake and Nvidia at the "Elite" tier, is among the AI service providers utilizing Snowpark Container Services to construct Snowflake Native Apps.

These applications, including baioniq and Dociphi, are tailored to specific business roles, speeding up industry-specific workflows and everyday operations. Developed with the Nvidia NeMo framework, they will be accessible on the Snowflake Marketplace, enabling users to deploy them seamlessly within their Snowflake environment.

Read Also: AMD Unleashes Next-Gen AI Chips, Challenging Nvidia for AI Hardware Supremacy

Snowflake Arctic

Snowflake Arctic, an advanced large language model trained on Nvidia H100 Tensor Core GPUs, is also now accessible as an Nvidia NIM, giving users immediate access to the model.

The Arctic NIM is currently live on the Nvidia API catalog, where developers can access it using complimentary credits. Additionally, it will be offered as a downloadable NIM.
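As an illustration of how a developer might try the hosted Arctic NIM, here is a hedged sketch that calls the Nvidia API catalog through its OpenAI-compatible interface. The base URL, the "snowflake/arctic" model identifier, and the API key are assumptions and placeholders that should be checked against the catalog listing for Arctic.

```python
# Hypothetical sketch: calling the hosted Arctic NIM via the Nvidia API
# catalog's OpenAI-compatible endpoint. The base URL and model identifier are
# assumptions; the API key is a placeholder obtained from the catalog.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed catalog endpoint
    api_key="<your_nvidia_api_key>",                 # placeholder
)

completion = client.chat.completions.create(
    model="snowflake/arctic",  # assumed model identifier for Arctic
    messages=[
        {
            "role": "user",
            "content": "Explain what a NIM inference microservice is in one paragraph.",
        }
    ],
    temperature=0.2,
    max_tokens=256,
)

print(completion.choices[0].message.content)
```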

"Data is the essential raw material of the AI industrial revolution," Huang said in a press statement. "Together, NVIDIA and Snowflake will help enterprises refine their proprietary business data and transform it into valuable generative AI."

Related Article: Nvidia Unveils AI Chatbot 'G-Assist' That Guides Gamers Through Games and Even Optimizes PCs
