
Validated models by Red Hat AI

Validated models by Red Hat® AI offer confidence, predictability, and flexibility when deploying third-party generative AI models across the Red Hat AI platform.


What makes these validated models so special?

With so many large language models (LLMs), inference server settings, and hardware accelerator options, it’s hard to find the right combination for performance, accuracy, and cost for your use case. 

With the latest updates in Red Hat AI 3.3, our collection of validated models makes that choice easier. Our repository of third-party models is validated to run efficiently across the Red Hat AI platform, and now includes a new batch of high-performing models. 

New models such as IBM Granite 4 and Apertus 8B prioritize transparency and auditability. Mistral Large 3 helps organizations that require data sovereignty in Europe. And NVIDIA’s own Nemotron model family delivers peak performance for customers who favor NVIDIA infrastructure. 


Features and benefits

Increased flexibility

Access the collection of validated and optimized models ready for inference—hosted on Hugging Face—to reduce time to value, promote consistency, and increase reliability of your AI apps.

Optimized inference

Optimize your AI infrastructure by choosing the right model, deployment settings, and hardware accelerators for a cost-effective, efficient deployment that aligns with your enterprise use cases.

Improved confidence

Access industry benchmarks, accuracy evaluations, and model optimization tools for evaluating, compressing, and validating third-party models across various deployment scenarios.

Get more from your models

Red Hat AI model validation is done using open-source tooling such as GuideLLM, Language Model Evaluation Harness, and vLLM to ensure reproducibility for customers.
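As a sketch of what that tooling looks like in practice, a validation-style run might pair an accuracy pass from the Language Model Evaluation Harness with a GuideLLM load test against a running vLLM server. The model ID, endpoint, and flags below are illustrative assumptions, not the exact Red Hat validation procedure; consult each project's documentation for current options.

```shell
# Accuracy: run an lm-evaluation-harness task on a vLLM backend
# (model ID is illustrative)
lm_eval --model vllm \
  --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto \
  --tasks gsm8k \
  --batch_size auto

# Performance: sweep request rates against a running vLLM server
# with GuideLLM (assumes a server is already listening on port 8000)
guidellm benchmark \
  --target "http://localhost:8000" \
  --rate-type sweep \
  --max-seconds 60
```

Pairing the two gives the accuracy-versus-throughput picture that the validation process is meant to capture.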

Validated models

These aren't just any LLMs. We have tested third-party models using realistic scenarios to understand exactly how they will perform in the real world. We use specialized tooling to assess LLM performance across a range of hardware.

Optimized models

Compressed for speed and efficiency. These LLMs are engineered to run faster and use fewer resources without sacrificing accuracy when deploying on vLLM. 

  • LLM Compressor is an open source library that packages the latest model-compression research into a single tool, enabling easy generation of compressed models with minimal effort.
  • vLLM is the leading open source high-throughput and memory-efficient inference and serving engine for optimized LLMs.
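As one plausible end-to-end flow (the model ID and flags are illustrative assumptions, not a Red Hat-documented procedure), you could serve a pre-compressed checkpoint with vLLM and query its OpenAI-compatible API:

```shell
# Serve an FP8-quantized checkpoint with vLLM (model ID is illustrative)
vllm serve RedHatAI/Llama-3.1-8B-Instruct-FP8-dynamic --max-model-len 4096 &

# Once the server is up, query the OpenAI-compatible endpoint
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "RedHatAI/Llama-3.1-8B-Instruct-FP8-dynamic",
       "prompt": "Summarize model compression in one sentence.",
       "max_tokens": 64}'
```

Because vLLM exposes an OpenAI-compatible API, existing client code can typically point at the compressed model with only a base-URL change.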

Build the solutions you need with Red Hat AI

Red Hat AI is the open source AI platform that works the way you do. Reduce costs with efficient models, customize them with your data and domain expertise, and deploy and manage workloads consistently across any infrastructure. All with tools designed to help your teams collaborate and scale.


Frequently asked questions

Where can I find the validated models?

The validated models are available on the Red Hat AI Ecosystem Catalog and the Red Hat AI repository on Hugging Face. The latter includes full model details, SafeTensor weights, and commands for quickly deploying on Red Hat AI Inference Server, RHEL AI, and Red Hat OpenShift AI.
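As a hedged example of the kind of quick start this enables (the model ID below is illustrative; browse the Red Hat AI organization on Hugging Face for the actual list), you could pull a checkpoint with the Hugging Face CLI and serve it locally with vLLM:

```shell
# Download a checkpoint to a local directory (illustrative model ID)
huggingface-cli download RedHatAI/granite-3.1-8b-instruct \
  --local-dir ./granite-8b

# Serve it via vLLM's OpenAI-compatible server under a friendly name
vllm serve ./granite-8b --served-model-name granite-8b
```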

How often do you add new validated models?

Red Hat intends to release a new set of validated models on a monthly basis following the cadence of upstream vLLM releases. Red Hat reserves the right to stop validating models for any reason.

Can you explain the validated model lifecycle?

Selected models will, at a minimum and in good faith, be validated for the next two vLLM minor versions (n+2): for each model we validate on a given vLLM release, we strive to maintain forward compatibility through at least the two subsequent vLLM versions.

Are these validated and optimized models fully supported by the Red Hat Support team?

No. Third-party models are not supported, indemnified, certified, or guaranteed in any way by Red Hat. Additionally, capacity guidance is simply guidance, not a guarantee of performance or accuracy. For details on the license of a specific model, contact the model provider.

How do I get personalized LLM deployment, configuration, and hardware accelerator guidance for my enterprise use case?

Send inquiries to validated-models@redhat.com for more information.

Keep learning

How to get started with AI at the enterprise

New validated models support predictable AI at scale

4 considerations for choosing the right AI model