
Hybrid cloud environments give businesses the best of both worlds, combining on-premises and cloud resources to provide both flexibility and scalability. Organizations can further improve their hybrid cloud strategy with artificial intelligence (AI) and machine learning (ML).

AI and ML models can help teams make better decisions by providing insights, optimizing resources and improving application performance. Large language models (LLMs) can help automate workflows and decrease the need for manual work. Predictive models can analyze usage patterns to help keep cloud costs down and more efficiently allocate resources. Let’s explore how teams can use AI/ML to get the most out of their hybrid cloud environments. 

Automated resource allocation 

Resource allocation and optimization are essential in hybrid cloud environments. By automating and optimizing with AI/ML, businesses can maximize their resources and keep costs down. Predictive analytics can analyze historical data and usage patterns to anticipate resource needs and automatically scale on-premises and cloud resources. AI/ML can also distribute workloads across hybrid clouds so they run in the most cost-effective and efficient locations based on latency, cost and availability.

Organizations can also achieve cost savings by dynamically adjusting resource allocation based on demand and predictive analytics while maintaining high service availability and application performance.
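As a rough illustration of demand-based scaling, the sketch below forecasts the next interval's load from recent usage history and sizes capacity with a safety headroom. The moving-average window, per-instance capacity and headroom factor are illustrative assumptions, not values from any Red Hat product.

```python
import math

def forecast_demand(history, window=3):
    """Predict next-interval demand as a moving average of recent samples.

    A real predictive model would account for trends and seasonality;
    this is the simplest possible stand-in.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_capacity(history, per_instance_capacity=100.0, headroom=1.2):
    """Return the instance count needed to cover forecast demand plus
    a safety headroom, never dropping below one instance."""
    predicted = forecast_demand(history)
    return max(1, math.ceil(predicted * headroom / per_instance_capacity))

# Example: requests/sec observed over the last six intervals
usage = [220, 260, 300, 340, 380, 420]
print(plan_capacity(usage))  # → 5 instances for the next interval
```

A production autoscaler would feed a decision like this into on-premises orchestration or a cloud provider's scaling API, but the core loop — forecast, add headroom, resize — is the same.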

Enhanced security 

AI-powered security monitoring helps organizations more quickly identify known cyber threats in complex hybrid cloud environments. With AI and ML-driven threat detection, businesses can more efficiently analyze security logs and network traffic in real time, increasing the likelihood of spotting and responding to potential attacks before they cause harm. Anomaly detection algorithms add another layer of protection by identifying unusual activity and flagging potential breaches early, improving the overall security posture of your hybrid cloud infrastructure.

By analyzing datasets and identifying patterns, AI algorithms can help mitigate emerging threats that traditional security measures may otherwise miss.
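A minimal sketch of the anomaly detection idea: flag data points that deviate sharply from a baseline. Real systems use far richer models, but a simple z-score check over a metric such as login attempts per minute shows the principle; the threshold here is an arbitrary illustrative choice.

```python
import math

def zscore_anomalies(values, threshold=2.5):
    """Return indices of points whose z-score exceeds the threshold.

    The threshold is an illustrative assumption; production anomaly
    detection would use learned models and multiple signals.
    """
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(variance)
    if std == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Login attempts per minute; the spike at index 5 stands out
attempts = [4, 6, 5, 7, 5, 80, 6, 4]
print(zscore_anomalies(attempts))  # → [5]
```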

Streamlined application development and deployment

AI-powered tools can accelerate application development and deployment, helping teams bring products to market faster while reducing human error. Machine learning can also augment developer team resources by optimizing code, detecting bugs and even suggesting architectural improvements for apps running in hybrid cloud environments—all of which help productivity and efficiency.

RHEL AI and Granite large language models

Red Hat Enterprise Linux AI (RHEL AI) includes a subset of the open source Granite language and code models. Granite models are a series of LLMs developed by IBM to help power enterprise AI applications. They are designed to support generative AI (gen AI) use cases involving language and code, such as text generation, code completion and code translation. As part of RHEL AI, these models offer organizations cost- and performance-optimized solutions for various AI use cases while providing Red Hat enterprise-grade technical support and our Open Source Assurance.

Application development use case: Code refactoring with Granite code models

Here's a use case demonstrating how Granite LLMs can assist with migrating applications to a public cloud. An enterprise can leverage a Granite model to refactor a legacy application or platform to improve performance, scalability and maintainability. In addition, moving a platform to a cloud environment can increase its resilience and make it more responsive.

In this example, a large financial institution has a risk assessment platform that evaluates client portfolios for compliance, fraud detection and credit risk. The application is built on outdated architecture, however, making it difficult to scale as data volume grows, expensive to maintain due to legacy dependencies and slow to process real-time risk assessments. Business leaders want to migrate the risk assessment platform to AWS, and, given the importance of this risk platform to their business operations, their approach calls for a refactoring migration.

What is refactoring?

Refactoring is a cloud migration approach in which applications are re-architected to better suit the cloud environment before being run on a cloud provider's infrastructure. It involves modifying a large portion of an application's existing codebase to take advantage of cloud-based features and the extra flexibility they provide. A refactoring migration is more complex and resource-intensive than other cloud migration approaches, however, because changes to the codebase must not alter the application's external behavior.
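To make the definition concrete, here is a toy behavior-preserving refactoring in Python. The rewritten version extracts a helper, renames variables and parameterizes magic numbers, yet returns exactly the same results as the original; the order-total example itself is purely illustrative.

```python
# Before: one tangled function with unclear names and magic numbers
def calc(d):
    t = 0
    for x in d:
        t += x["p"] * x["q"]
    if t > 1000:
        t = t * 0.9
    return t

# After: same external behavior, clearer structure
def line_total(item):
    """Price times quantity for a single line item."""
    return item["p"] * item["q"]

def order_total(items, discount_threshold=1000, discount_rate=0.9):
    """Sum line totals and apply a bulk discount above the threshold."""
    total = sum(line_total(i) for i in items)
    if total > discount_threshold:
        total *= discount_rate
    return total

# The refactoring is only valid if outputs are identical
order = [{"p": 400, "q": 2}, {"p": 300, "q": 1}]
assert calc(order) == order_total(order)  # both → 990.0
```

Tests like the final assertion are what make refactoring safe at scale, whether the changes come from a developer or an AI assistant.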

An open source solution 

Developer teams can integrate Granite models into their development environment to provide AI coding assistance. In tandem with an open source AI code assistant, application developers can adopt Granite code models to assist in a refactoring migration. These models can run locally using tools like Ollama or InstructLab while teams work to modernize their applications for cloud environments.

Once set up, developers simply select code that needs to be improved to see the Granite model's suggestions. The Granite model attempts to modernize syntax, extract methods, rename variables and more. This method helps speed up development while keeping human expertise at the forefront, with the developer team reviewing and accepting any suggested code changes.
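As a sketch of how such a local setup might be driven programmatically, the snippet below builds a request body for Ollama's /api/generate endpoint. The model name and prompt wording are illustrative assumptions, and a running local Ollama server (default port 11434) would be needed to actually send the request.

```python
import json

def build_refactor_request(model, code_snippet):
    """Construct the JSON body for Ollama's /api/generate endpoint.

    The model name is whichever Granite code model you have pulled
    locally; the prompt wording here is an illustrative assumption.
    """
    prompt = (
        "Refactor the following code for readability and maintainability "
        "without changing its external behavior:\n\n" + code_snippet
    )
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_refactor_request("granite-code:8b", "def f(a,b): return a+b")
print(body)
# POST this body to http://localhost:11434/api/generate on a local
# Ollama server; the response contains the model's suggested rewrite,
# which a developer then reviews before accepting.
```

Keeping the request local is what preserves the privacy benefit described below: the code being refactored never leaves the developer's machine.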

An advantage of this approach is that developers can use these AI tools locally without potentially compromising sensitive information. This hybrid approach also makes coding more efficient, and the open source nature of Granite models provides transparency for stakeholders. See "Open source AI coding assistance with the Granite models" for details about how to set up an AI coding assistant using local Granite models to help with refactoring migrations, code completion, contextual documentation and debugging.

Benefits

Organizations can enhance their hybrid cloud deployments by incorporating AI in their development environment. In this particular use case, the benefits include:

  • Faster app deployment to a public cloud
  • Improved team efficiency through reduced operational overhead
  • Accelerated response times through automation
  • Reduced risk by minimizing human error
  • More consistent deployments

Finally, AI improves cost-effectiveness in hybrid cloud environments by optimizing resource allocation and helping identify other cost-saving opportunities.

Visit Red Hat Developer Hub

At Red Hat Developer Hub, we provide online resources for developers interested in building enterprise AI applications. You can even download pre-built, bootable images of RHEL AI, including the Granite family of open source LLMs we've talked about here.

Resource

Getting started with enterprise AI: A beginner's guide

This beginner's guide explains how Red Hat OpenShift AI and Red Hat Enterprise Linux AI can accelerate your AI adoption.

About the author

Adam Wealand's experience includes marketing, social psychology, artificial intelligence, data visualization, and infusing the voice of the customer into products. Wealand joined Red Hat in July 2021 and previously worked at organizations ranging from small startups to large enterprises. He holds an MBA from Duke's Fuqua School of Business and enjoys mountain biking all around Northern California.

