PALO ALTO, CALIF. -- Denodo, a leader in data management, announced that it has integrated NVIDIA NIM inference microservices into the Denodo Platform as part of its effort to bolster the platform's rapidly expanding artificial intelligence (AI) capabilities.
The Denodo Platform's logical data management capabilities, combined with NVIDIA NIM, enable enterprise customers to:
· Rapidly curate and transform data through AI-ready pipelines that feed large language models (LLMs)
· Improve model accuracy and combat hallucinations by accessing trusted enterprise data through retrieval-augmented generation (RAG) pipelines (a minimal sketch of this flow follows the list)
· Simplify secure, real-time access to distributed enterprise knowledge for generative AI applications
· Maintain data privacy/security and enforce granular access controls for AI models that access organizational data
· Fast-track AI/ML deployments from data prep to model scoring with NVIDIA NeMo, leveraging an integrated data fabric for enterprise data input into this process
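To make the RAG pattern in the list above concrete, the following minimal sketch shows one way governed rows from a logical view could be retrieved and packed into an LLM prompt. It is illustrative only: the host, port, credentials, view name (customer_360), and columns are placeholders, and it assumes the Denodo Platform can be reached through a PostgreSQL-compatible client such as psycopg2. The generation step itself is sketched after the next paragraph.

```python
# Minimal RAG retrieval sketch (illustrative only). Assumes the Denodo
# Platform is reachable through a PostgreSQL-compatible client such as
# psycopg2; host, port, credentials, and the customer_360 view are placeholders.
import psycopg2


def fetch_trusted_context(customer_id: str) -> str:
    """Pull governed rows from a logical view to ground the LLM prompt."""
    conn = psycopg2.connect(
        host="denodo.example.com",  # placeholder connection details
        port=9996,
        dbname="enterprise_fabric",
        user="ai_service",
        password="***",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT account_status, open_cases, last_order_date "
                "FROM customer_360 WHERE customer_id = %s",
                (customer_id,),
            )
            rows = cur.fetchall()
    finally:
        conn.close()
    return "\n".join(str(row) for row in rows)


def build_messages(question: str, customer_id: str) -> list[dict]:
    """Augment the user question with retrieved, governed enterprise data."""
    context = fetch_trusted_context(customer_id)
    return [
        {"role": "system",
         "content": "Answer using only the enterprise context provided."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```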
NVIDIA NIM is a collection of cloud-native microservices that simplify and accelerate the deployment of generative AI models across a variety of environments, including the cloud, on-premises data centers, and workstations. It connects the power of the latest foundation models, securely deployed on NVIDIA’s accelerated infrastructure, with enterprise customers everywhere.
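LLM-focused NIM microservices expose an OpenAI-compatible API, so a grounded prompt such as the one assembled in the sketch above can be sent to a self-hosted model with a standard client. The snippet below is a minimal sketch, not a documented integration path; the base URL, API key, and model name are placeholders for whichever NIM container a given deployment runs.

```python
# Calling a locally deployed NIM microservice through its OpenAI-compatible
# API (illustrative). The base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")


def generate(messages: list[dict]) -> str:
    """Send a grounded prompt (e.g., from build_messages above) to the model."""
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # placeholder NIM model name
        messages=messages,
        temperature=0.1,  # keep answers close to the retrieved context
    )
    return response.choices[0].message.content
```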
The Denodo Platform’s integration of NVIDIA NIM helps customers seamlessly leverage advanced AI capabilities within their data management workflows. It also helps enable enterprises to deploy and scale generative AI applications with unprecedented speed and efficiency. Key use cases include improved analytics and robust AI-driven insights across such verticals as financial services, healthcare, retail, and manufacturing, accelerating time to insight from diverse data sources. As an NVIDIA Metropolis partner, Denodo is working to deploy vision AI and vision language model (VLM) NIM microservices to streamline industrial processes and increase worker safety.
Using the Denodo Platform with NVIDIA NeMo can significantly boost SQL query generation accuracy for LLMs. RAG capabilities yield more trustworthy responses by letting models retrieve relevant knowledge from the data fabric before generating output, and the Denodo Platform helps simplify and accelerate data access and reduce errors when models query the platform.
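As a rough illustration of that grounding step, the sketch below supplies the model with a view definition before asking it to write SQL. The schema text, example question, and model name are illustrative assumptions rather than documented Denodo or NVIDIA APIs; in practice the view definition would be retrieved from the data fabric's metadata and the generated SQL validated before being executed against the platform.

```python
# Sketch of RAG-grounded SQL generation (illustrative; all names are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# Placeholder: in practice this view definition would be pulled from the
# data fabric's metadata rather than hard-coded.
VIEW_SCHEMA = (
    "customer_360(customer_id VARCHAR, account_status VARCHAR, "
    "open_cases INTEGER, last_order_date DATE)"
)


def question_to_sql(question: str) -> str:
    """Ground the model in a view definition before it writes SQL."""
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # placeholder NIM model name
        messages=[
            {"role": "system",
             "content": "Write one read-only SQL query for this schema:\n"
                        f"{VIEW_SCHEMA}\nReturn only the SQL."},
            {"role": "user", "content": question},
        ],
        temperature=0.0,
    )
    return response.choices[0].message.content


# The generated SQL would then be validated and executed against the Denodo
# Platform, for example over the same PostgreSQL-compatible connection shown
# in the first sketch.
print(question_to_sql("How many customers have open support cases?"))
```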
By integrating NVIDIA NIM, Denodo helps ensure that customers maintain full control over their AI deployments, whether on premises or in the cloud. The integration is set to deliver significant business benefits, including accelerated time to value and enhanced security for AI applications.
“I am thrilled by this integration, and I think it’s a sign of the direction in which Denodo’s logical data management capabilities can take us,” said Narayan Sundar, Senior Director-Strategic Alliances at Denodo. “Denodo is at the forefront of supporting RAG-enabled GenAI applications with real-time, governed, trusted data from an organization’s vast data estates, and I look forward to seeing the innovations, as they start to emerge, from enterprise customers that leverage the Denodo Platform with NVIDIA NIM integration.”