Earlier this year, Chris Wright reflected on what comes next for telecommunications service providers in 2024. A key theme was that telcos will embrace intelligent automation and AI technologies with the goal of improving business and operational efficiency across network, security, infrastructure and applications in hybrid environments. Since then, Red Hat has released a number of exciting projects at Red Hat Summit 2024, including the introduction of Red Hat Enterprise Linux (RHEL) AI and enhancements to Red Hat OpenShift AI, that will provide telcos with powerful AI platforms to help improve operations, reduce costs, enhance customer service and drive innovation.
Today, I want to highlight how a few of these AI-driven announcements can help telcos stay competitive in areas such as fraud detection, enhanced data management, autonomous networks and resource optimization, network traffic management, energy management, chatbots, personalized customer support, and AI models in the radio access network (RAN) and core, all while meeting the increased demands of modern digital infrastructure.
Introducing Red Hat Enterprise Linux AI
This year Red Hat announced RHEL AI, a foundational model platform that enables users to more seamlessly develop, test and deploy generative AI (GenAI) models. RHEL AI brings together the open source-licensed Granite large language model (LLM) family from IBM Research, InstructLab model alignment tools based on the LAB (Large-scale Alignment for chatBots) methodology and a community-driven approach to model development through the InstructLab project. The entire solution is packaged as an optimized, bootable RHEL image for individual server deployments across the hybrid cloud and is also included as part of Red Hat OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform, for building, training, tuning and serving models at scale across distributed cluster environments.
RHEL AI will empower telcos by lowering the barriers to developing, experimenting with and fine-tuning AI models, making it far easier to plan for the potential of AI in their organizations without a significant upfront investment in skills or hardware. The offering can also help improve the overall telco customer experience by enabling a wider range of subject matter and domain experts, not just data scientists, to contribute to the training, tuning and alignment of LLMs. This means that chatbots, code assistants, consumer services and more can be built using input directly from the people actually doing these tasks, without going through a data scientist workflow. This helps telcos look at AI in a new way, as something that can be experimented with quickly and efficiently, while also providing a pathway to building AI capabilities derived from their own unique knowledge and skills.
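As a concrete illustration, InstructLab gathers this expert input as plain question-and-answer files in a Git-based taxonomy, so a network support engineer could contribute troubleshooting knowledge without touching model code. The sketch below is hedged: the field names loosely follow InstructLab's qna.yaml convention, the exact schema depends on the taxonomy version, and the content shown is purely illustrative.

```yaml
# qna.yaml - illustrative InstructLab taxonomy contribution (hypothetical content)
version: 2
created_by: example-telco-sme          # hypothetical contributor handle
task_description: Answer common 5G troubleshooting questions
seed_examples:
  - question: A subscriber reports slow 5G speeds in a dense urban cell. What should I check first?
    answer: >
      Start with cell load and resource-block utilization, then verify beam
      configuration and check for interference from neighboring cells.
  - question: What does a high RACH failure rate usually indicate?
    answer: >
      Often congestion or misconfigured preamble power settings; review RACH
      parameters and cell coverage overlap.
```

A contribution like this is reviewed through the normal taxonomy pull-request flow before being used for synthetic data generation and model alignment.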
Revolutionizing telco operations with Red Hat OpenShift AI
At Red Hat Summit, we announced new capabilities for Red Hat OpenShift AI, an open hybrid AI and machine learning (ML) platform built on Red Hat OpenShift that enables telcos to create and deliver AI-enabled applications at scale across hybrid clouds. There’s a whole host of updates including:
- Model serving at the edge to remote locations using single-node Red Hat OpenShift where applicable for specific use cases (tech preview)
- Enhanced model serving with the ability to use multiple model servers to support both predictive AI and GenAI
- Improved model development and distributed model training
- Model monitoring visualizations for performance and operational metrics and more
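To make the model-serving bullet concrete: single-model serving in Red Hat OpenShift AI builds on the KServe project, where a model is deployed declaratively as an InferenceService behind a standard inference endpoint. The sketch below is illustrative only; the model name, storage path and resource figures are invented for this example.

```yaml
# Hedged sketch of a KServe InferenceService, as used by OpenShift AI
# single-model serving. All names and paths here are hypothetical.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: fraud-detector                  # hypothetical model name
spec:
  predictor:
    model:
      modelFormat:
        name: onnx
      storageUri: s3://models/fraud-detector/1   # hypothetical bucket
      resources:
        limits:
          nvidia.com/gpu: "1"
```

The same declarative pattern applies whether the endpoint serves a predictive fraud model or a generative one, which is what lets one platform cover both.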
All of these enhancements will provide faster innovation, increased productivity and the capacity to layer AI into telcos’ daily operations. But how?
Generally speaking, bringing model training and serving to the edge can be costly due to setup complexity. But a common platform like Red Hat OpenShift AI from the core to the edge makes this job more efficient, with increased automation to help reduce operational expenditures. With intelligent automation, telcos can streamline their network operations, improve security measures and optimize resource allocation. This is particularly crucial as telcos face financial pressure from modernizing legacy systems and expanding 5G networks. With AI at the edge, telcos will benefit from faster data processing and real-time decision-making, which can significantly enhance the performance of AI-enabled applications and services. This approach supports the growing volume of machine-generated data and improves connectivity and analytics capabilities. Think of digital concierge services for business customers, or telco use cases like traffic monitoring, load management, infrastructure tuning, network lifecycle management and power management.
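As a toy example of the lightweight, real-time analytics that become practical at the edge, the sketch below flags anomalous traffic samples with a simple rolling z-score. The function name, window size and threshold are illustrative and unrelated to any Red Hat product.

```python
from collections import deque
from statistics import mean, stdev

def traffic_anomalies(samples, window=20, threshold=3.0):
    """Flag samples that deviate sharply from the recent rolling baseline."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(samples):
        if len(history) >= 5:  # need a minimal baseline before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# Steady traffic around 100 Mbps with one sudden spike at index 8
load = [100, 101, 99, 100, 102, 98, 100, 101, 400, 100, 99]
print(traffic_anomalies(load))  # -> [8]
```

Running a check like this next to the radio site, rather than backhauling every sample to a central datacenter, is exactly the latency and bandwidth win that edge inference promises.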
If we look at the RAN, telcos will be able to leverage AI models for more dynamic control of frequencies, sectors, cells and base stations, going beyond simple time-of-day presets. They can also apply AI techniques to channel estimation and self-organizing network (SON) functions.
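For context on what AI-based channel estimation improves upon, the classical baseline is a least-squares estimate computed over known pilot symbols. The minimal numpy sketch below assumes a single flat-fading channel tap; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
pilots = np.array([1, -1, 1, 1, -1, 1], dtype=complex)   # known pilot symbols
h_true = 0.8 * np.exp(1j * 0.3)                          # flat-fading channel tap
noise = 0.01 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))
received = h_true * pilots + noise                       # what the receiver observes

# Least-squares channel estimate over the pilots: h = (p^H y) / (p^H p)
h_ls = np.vdot(pilots, received) / np.vdot(pilots, pilots)
print(abs(h_ls - h_true))   # small estimation error
```

Learned estimators aim to beat this baseline under realistic impairments (sparse pilots, fast fading, nonlinearities) where the least-squares assumptions break down.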
Red Hat OpenShift AI also delivers MLOps by integrating AI and ML capabilities into the standard DevOps model, covering the entire lifecycle of AI models. This includes building, training, serving and monitoring AI models, with a focus on fine-tuning for accuracy, which is crucial for telcos.
Turkcell, a leading converged telecommunications and technology services provider, built its AI services architecture and application hub on Red Hat OpenShift to transform customer experiences, drive operational efficiencies and bring a greater diversity of consumer and business innovations to market faster.
And we’re also working with our partners to more tightly integrate their leading AI technologies with Red Hat OpenShift AI. Let’s take a look at what we announced with AMD, Intel, NVIDIA and Run:ai.
Tighter integration with AMD will help telcos run Red Hat OpenShift on AMD platforms, including the latest GPUs. And by further integrating Red Hat OpenShift AI with Intel's AI hardware, telcos will be able to achieve interoperability from the datacenter to the cloud to the edge, with greater portability to build, run and deploy AI workloads wherever needed.
Red Hat also announced a collaboration with NVIDIA to enable users to combine AI models trained using Red Hat OpenShift AI with NVIDIA NIM microservices to develop GenAI-driven applications on a single MLOps platform. This will help service providers use Red Hat OpenShift as a developer platform for building copilots for tasks such as customer service chat assistance, compliance checks, malware detection and more.
Red Hat is working with Run:ai to bring its AI infrastructure management, resource allocation and GPU orchestration capabilities to Red Hat OpenShift AI. By streamlining AI operations and optimizing the underlying infrastructure, this collaboration will help service providers build an intelligent infrastructure that can adapt and tune itself based on workloads like 5G core and RAN applications.
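At its core, GPU orchestration is a scheduling problem: grant scarce accelerators to the highest-priority workloads and queue the rest. The toy greedy allocator below illustrates the shape of that problem; it is purely illustrative and not how Run:ai works internally.

```python
def schedule_gpus(requests, capacity):
    """Greedily grant GPU requests in priority order until capacity runs out.

    requests: list of (workload_name, gpus_needed, priority) tuples.
    Returns (granted, queued) lists of workload names.
    """
    granted, queued = [], []
    free = capacity
    for name, gpus, _prio in sorted(requests, key=lambda r: -r[2]):
        if gpus <= free:
            granted.append(name)
            free -= gpus
        else:
            queued.append(name)   # waits until GPUs are released
    return granted, queued

# Hypothetical telco workloads competing for a 6-GPU node
reqs = [("ran-inference", 2, 10), ("5g-core-analytics", 4, 5), ("batch-tuning", 4, 1)]
print(schedule_gpus(reqs, capacity=6))
```

Real orchestrators add fractional GPU sharing, preemption and fairness on top of this basic priority-and-capacity loop.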
Greater automation for intelligent networks of the future
Red Hat Lightspeed is expanding across platforms to bring enterprise-ready AI to Red Hat OpenShift and Red Hat Enterprise Linux. It is intended to deliver intelligent, natural language processing capabilities that make Red Hat’s enterprise-grade Linux and cloud-native application platforms even easier to use for novice users, while acting as a force multiplier for experienced professionals.
This means that many telcos using Red Hat OpenShift for managing and developing apps, or RHEL as their operating system, will now be able to create and deliver software faster with the help of GenAI. This work will also make IT and network operations more efficient through digital assistance, freeing up time for core or new business innovation. Telcos will be able to leverage these automation tools in the construction, deployment and maintenance of future-forward telco networks, propelling their teams to unprecedented levels of scale and efficiency.
Regulatory compliance without stifling innovation
Red Hat announced automated policy as code, a new capability coming to future versions of Red Hat Ansible Automation Platform. The capability will help enforce policies and compliance across hybrid cloud estates that increasingly include a varied and growing number of AI applications. Another step in automation maturity, policy as code will make it possible to adhere to changing internal or external requirements and better prepare for sprawling infrastructure in support of scaling AI workloads.
This means that telcos will benefit from:
- Tighter integration of policy engines and capabilities with Ansible Automation Platform to improve overall infrastructure security posture
- The ability to define policies in plain text that can be translated and implemented across their infrastructure, for example: "run a compliance check on all my switches across the network"
- An easier path to meeting regulatory and compliance requirements
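The plain-text policy idea can be pictured as declarative checks evaluated across a device inventory. The sketch below is purely illustrative; the policy format and device fields are invented for this example and are not Ansible Automation Platform syntax.

```python
# Toy policy-as-code check: declare the desired state as data,
# then evaluate every device in the inventory against it.
POLICY = {"ssh_v2_only": True, "min_firmware": "4.2"}

def check_switch(switch):
    """Return a list of policy violations for one switch."""
    violations = []
    if POLICY["ssh_v2_only"] and not switch.get("ssh_v2"):
        violations.append("ssh_v2 disabled")
    # String comparison is a toy simplification; real checks parse versions.
    if switch.get("firmware", "0") < POLICY["min_firmware"]:
        violations.append("firmware below %s" % POLICY["min_firmware"])
    return violations

inventory = [
    {"name": "sw-core-1", "ssh_v2": True, "firmware": "4.3"},
    {"name": "sw-edge-7", "ssh_v2": False, "firmware": "3.9"},
]
report = {sw["name"]: check_switch(sw) for sw in inventory}
print(report)  # only sw-edge-7 is non-compliant
```

The value of expressing policy as data like this is that the same definition can drive both the compliance report and the automated remediation step.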
Red Hat has significantly improved its product portfolio to provide native AI capabilities for telcos to develop, train and deploy models for both predictive AI and GenAI. We will continue to enhance these offerings to make it easy for our customers to consume AI for their infrastructure and services.
About the author
Azhar Sayeed is responsible for developing and driving End-to-End solution architecture for Red Hat's Telcos and Communication Service Providers (CSPs) customers. He contributes to implementation architectures and develops solutions for OpenStack deployment for scale and hyperconvergence.