Red Hat, Inc., the world’s leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux (RHEL) AI across the hybrid cloud. RHEL AI is Red Hat’s foundation model platform that enables users to more seamlessly develop, test and run generative AI (gen AI) models to power enterprise applications. The platform brings together the open source-licensed Granite large language model (LLM) family and InstructLab model alignment tools, based on the Large-scale Alignment for chatBots (LAB) methodology, packaged as an optimized, bootable RHEL image for individual server deployments across the hybrid cloud.

While gen AI’s promise is immense, the associated costs of procuring, training and fine-tuning LLMs can be astronomical, with some leading models costing nearly $200 million to train before launch. This does not include the cost of aligning a model to the specific requirements or data of a given organization, which typically requires data scientists or highly specialized developers. No matter the model selected for a given application, alignment is still required to bring it in line with company-specific data and processes, making efficiency and agility key for AI in actual production environments.

Red Hat believes that over the next decade, smaller, more efficient and built-to-purpose AI models will form a substantial mix of the enterprise IT stack, alongside cloud-native applications. But to achieve this, gen AI needs to be more accessible and available, from its costs to its contributors to where it can run across the hybrid cloud. For decades, open source communities have helped solve similar challenges for complex software problems through contributions from diverse groups of users; a similar approach can lower the barriers to effectively embracing gen AI.

An open source approach to gen AI

These are the challenges that RHEL AI intends to address: making gen AI more accessible, more efficient and more flexible for CIOs and enterprise IT organizations across the hybrid cloud. RHEL AI helps:

  • Empower gen AI innovation with enterprise-grade, open source-licensed Granite models aligned with a wide variety of gen AI use cases.
  • Streamline aligning gen AI models to business requirements with InstructLab tooling, making it possible for domain experts and developers within an organization to contribute unique skills and knowledge to their models even without extensive data science skills.
  • Train and deploy gen AI anywhere across the hybrid cloud by providing all of the tools needed to tune and deploy models for production servers wherever associated data lives. RHEL AI also provides a ready on-ramp to Red Hat OpenShift AI for training, tuning and serving these models at scale while using the same tooling and concepts.

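The last point describes a client-facing workflow: a model tuned with InstructLab tooling is typically served behind an OpenAI-compatible inference endpoint that applications call over HTTP. The short Python sketch below illustrates that pattern; the endpoint URL, port and model name are placeholders for illustration, not details taken from this announcement, so adjust them for your own deployment.

```python
# Minimal sketch: query a locally served, InstructLab-tuned Granite model
# through an OpenAI-compatible chat completions endpoint.
# Assumptions (not from the announcement): a model is already being served
# locally on port 8000 and accepts the standard /v1/chat/completions schema.

import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder local endpoint
MODEL = "granite-7b-lab"  # placeholder model name; use whatever your server exposes

payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You answer questions about internal company policy."},
        {"role": "user", "content": "Summarize the travel reimbursement policy in two sentences."},
    ],
    "temperature": 0.2,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read().decode("utf-8"))

# OpenAI-compatible servers return generated text under choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```

Because the request shape stays the same regardless of where the model runs, the same client code can target a laptop, an on-premises server or a cloud instance, which is the portability point the list above makes.
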
RHEL AI is also backed by the benefits of a Red Hat subscription, which includes trusted enterprise product distribution, 24×7 production support, extended model lifecycle support and Open Source Assurance legal protections.

RHEL AI extends across the hybrid cloud

Bringing a more consistent foundation model platform closer to where an organization’s data lives is crucial in supporting production AI strategies. As an extension of Red Hat’s hybrid cloud portfolio, RHEL AI will span nearly every conceivable enterprise environment, from on-premises datacenters to edge environments to the public cloud. This means that RHEL AI will be available directly from Red Hat, from Red Hat’s original equipment manufacturer (OEM) partners and to run on the world’s largest cloud providers, including Amazon Web Services (AWS), Google Cloud, IBM Cloud and Microsoft Azure. This enables developers and IT organizations to use the power of hyperscaler compute resources to build innovative AI concepts with RHEL AI.

Availability

RHEL AI is generally available today via the Red Hat Customer Portal to run on-premises or for upload to AWS and IBM Cloud as a “bring your own subscription” (BYOS) offering. Availability of a BYOS offering on Azure and Google Cloud is planned for Q4 2024, and RHEL AI is also expected to be available on IBM Cloud as a service later this year.

Red Hat plans to further expand the aperture of RHEL AI cloud and OEM partners in the coming months, providing even more choice across hybrid cloud environments.

Supporting Quotes

Joe Fernandes, vice president and general manager, Foundation Model Platforms, Red Hat

“For gen AI applications to be truly successful in the enterprise, they need to be made more accessible to a broader set of organizations and users and more applicable to specific business use cases. RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud, while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI.”

Hillery Hunter, CTO and general manager of innovation, IBM Infrastructure

“IBM is committed to helping enterprises build and deploy effective AI models, and scale with speed. RHEL AI on IBM Cloud is bringing open source innovation to the forefront of gen AI adoption, allowing more organizations and individuals to access, scale and harness the power of AI. With RHEL AI bringing together the power of InstructLab and IBM’s family of Granite models, we are creating gen AI models that will help clients drive real business impact across the enterprise.”

Jim Mercer, program vice president, Software Development, DevOps & DevSecOps, IDC

“The benefits of enterprise AI come with the sheer scale of the AI model landscape and the inherent complexities of selecting, tuning, and maintaining in-house models. Smaller, built-to-purpose, and more broadly accessible models can make AI strategies more achievable for a much broader set of users and organizations, which is the area that Red Hat is targeting with RHEL AI as a foundation model platform.”
