Red Hat Enterprise Linux AI
Red Hat® Enterprise Linux® AI is a foundation model platform to seamlessly develop, test, and run large language models (LLMs) for enterprise applications.
Deploy with partners
Artificial intelligence (AI) model training requires optimized hardware and powerful computation capabilities. Get more from Red Hat Enterprise Linux AI by extending it with other integrated services and products.
Dell and Red Hat deliver a consistent AI experience through an optimized and cost-effective single-server environment.
Lenovo and Red Hat deliver optimized performance, helping customers quickly put AI initiatives into production.
NVIDIA and Red Hat offer customers a scalable platform that accelerates a diverse range of AI use cases with unparalleled flexibility.
How Red Hat Enterprise Linux AI works
Red Hat Enterprise Linux AI brings together:
- InstructLab model alignment tools, which open the world of community-developed LLMs to a wide range of users.
- A bootable image of Red Hat Enterprise Linux, including popular AI libraries such as PyTorch and hardware-optimized inference for NVIDIA, Intel, and AMD.
- Enterprise-grade technical support and Open Source Assurance legal protections.
Red Hat Enterprise Linux AI is portable across hybrid cloud environments. From there, you can scale your AI workflows with Red Hat OpenShift® AI and advance to IBM watsonx.ai for additional enterprise AI development, data management, and model governance capabilities.
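The InstructLab workflow described above can be sketched from the command line with the `ilab` CLI. This is a minimal illustration of the typical tune-and-evaluate loop; defaults such as which Granite model is downloaded vary by release, so consult the product documentation for specifics.

```shell
# Initialize an InstructLab configuration for this machine
ilab config init

# Download a base model to work with
ilab model download

# Generate synthetic training data from taxonomy contributions
ilab data generate

# Align (fine-tune) the model using the generated data
ilab model train

# Serve the tuned model locally, then chat with it to evaluate results
ilab model serve
ilab model chat
```

Serving and chatting are separate steps because `ilab model serve` runs an inference server that `ilab model chat` then connects to.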
Take control of LLMs with open source
Generative AI (gen AI) is a catalyst for groundbreaking change, disrupting everything from how software is made to how we communicate. But frequently, the LLMs used for gen AI are tightly controlled, and cannot be evaluated or improved without specialized skills and high costs.
The future shouldn’t be in the hands of the few. With Red Hat Enterprise Linux AI and its open source approach, you can encourage gen AI innovation with trust and transparency, while lowering costs and removing barriers to entry.
You can even contribute directly to AI model development with InstructLab, a community-driven solution for enhancing LLM capabilities.
Features and benefits
LLMs for the enterprise
Tune smaller, purpose-built models with your own data.
Community collaboration
InstructLab simplifies generative AI model experimentation and alignment tuning.
Cloud-native scalability
Red Hat Enterprise Linux image mode lets you manage your AI platform as a container image, streamlining your approach to scaling.
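Because image mode builds on the open source bootc project, managing the AI platform looks like switching between bootable container images rather than patching individual packages. A minimal sketch with the `bootc` CLI follows; the registry path is a placeholder assumption, not a documented image name.

```shell
# Inspect the currently booted image and any staged updates
sudo bootc status

# Switch the host to a different bootable container image
# (example.registry/rhel-ai:latest is a placeholder)
sudo bootc switch example.registry/rhel-ai:latest

# Pull and stage the newest version of the current image;
# the update takes effect on the next reboot
sudo bootc upgrade
```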
Acceleration and AI tooling
Open source hardware accelerators, plus the optimized deep learning features of PyTorch, support faster results.
Explore related resources
Article
RAG vs. fine-tuning
Article
What is InstructLab?
Product
Red Hat OpenShift AI
Overview
Red Hat Consulting: MLOps Foundation
Frequently asked questions
What is the difference between Red Hat Enterprise Linux AI and Red Hat Enterprise Linux?
Red Hat® Enterprise Linux® AI is a foundation model platform to develop, test, and run large language models (LLMs) for enterprise applications.
Red Hat Enterprise Linux is a commercial open source Linux distribution developed by Red Hat that provides a flexible and stable foundation to support hybrid cloud innovation.
Red Hat Enterprise Linux AI is delivered as a Red Hat Enterprise Linux bootable image that includes AI libraries, Granite 3.0 models, and InstructLab tooling.
Do I need to buy Red Hat Enterprise Linux to use Red Hat Enterprise Linux AI?
No, a Red Hat Enterprise Linux AI license is sufficient and includes all of the components needed.
What’s included in Red Hat Enterprise Linux AI?
Red Hat Enterprise Linux AI is delivered as a bootable Red Hat Enterprise Linux container image that includes:
- InstructLab model alignment tools.
- Open source-licensed Granite models.
- PyTorch and runtime libraries.
- Drivers for accelerators (NVIDIA, Intel, AMD).
What’s the difference between Red Hat Enterprise Linux AI and Red Hat OpenShift AI?
Red Hat Enterprise Linux AI provides out-of-the-box large language models and code language models in a single server for both development and inference along with the tools needed to customize them with customer data. Fully covered by the Open Source Assurance framework, this simplified approach to generative AI is designed to reduce risk and improve accessibility for developers and domain experts who may lack the data science expertise to tune models.
Red Hat OpenShift® AI provides all of the tools needed to help customers build AI-enabled applications at scale. Red Hat OpenShift AI offers a comprehensive, integrated MLOps platform to help manage the lifecycle of models, ensuring support for distributed compute, collaboration workflows, monitoring, and hybrid-cloud applications.
Red Hat OpenShift AI includes access to Red Hat Enterprise Linux AI, so teams can use the same models and alignment tools in their OpenShift AI architecture as well as benefit from additional enterprise MLOps capabilities.
How is Red Hat Enterprise Linux AI priced?
The Red Hat Enterprise Linux AI license is priced per accelerator.
Contact Sales
Talk to a Red Hatter
Reach out to our sales team below for Red Hat Enterprise Linux AI pricing information.
To learn more about our partnerships, visit our catalog page.