
It is hard to imagine any modern computer system that hasn't been improved by the power of artificial intelligence (AI). For example, when you take a picture with your smartphone camera, on average more than twenty deep learning (DL) models spring into action, ranging from object detection to depth perception, all working in unison to help you take that perfect picture!

Business processes, productivity applications and user experiences can all be enhanced by some form of AI, and few other technologies have grown at the same speed or with the same reach. Like any other piece of technology, however, AI comes with its own risks, which in this case include security and safety concerns and possibly even legal obligations. In this article, we’ll take a brief look at some of these safety and security concerns, particularly those involving generative AI (gen AI), and how we can develop safer, more secure and more trustworthy AI systems.

Differentiating between security and safety

Like any computer system (hardware or software), AI systems can be targeted or abused for nefarious purposes through techniques such as jailbreaking, prompt injection and adversarial training. AI systems bring a new paradigm to the industry, however: the safety of the output data. This is mainly because of the following:

  • AI output is generated based on the model's prior training, and the quality of the output depends on the quality of the data used in training. Well-known models take pride in using as much data as is available, often measured by the number of tokens used to train the model. The theory is that the more tokens used, the more effective the model's training.
  • Output from the model may be used to help make business, user and technical decisions. This poses the risk of financial losses as well as potential safety and legal implications. For example, there is no shortage of insecure code on the internet, so any model trained on it runs the risk of generating insecure code as a result. If this generated code is used directly in a software project, it could become an entirely new kind of supply chain attack.
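As a deliberately naive illustration of that last point, a project could scan AI-generated code for known-risky patterns before it is merged. The pattern list below is an assumption for illustration only; real reviews would rely on proper static analysis tools and human auditing:

```python
import re

# Hypothetical, minimal sketch: flag patterns that commonly indicate
# insecure code in an AI-generated snippet. The list is illustrative,
# not a complete or authoritative check.
RISKY_PATTERNS = [
    (r"\beval\s*\(", "use of eval() on possibly untrusted input"),
    (r"\bos\.system\s*\(", "shell command execution"),
    (r"verify\s*=\s*False", "TLS certificate verification disabled"),
    (r"\bpickle\.loads\s*\(", "deserialization of untrusted data"),
]

def scan_generated_code(source: str) -> list[str]:
    """Return a list of warnings for risky patterns found in the code."""
    warnings = []
    for pattern, reason in RISKY_PATTERNS:
        if re.search(pattern, source):
            warnings.append(reason)
    return warnings

snippet = 'import os\nos.system("rm -rf " + user_input)\n'
print(scan_generated_code(snippet))  # → ['shell command execution']
```

A real pipeline would treat such a scan as one gate among many, not as proof that the generated code is safe.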

While some aspects of AI security and safety are entangled, most safety frameworks tend to deal with them separately. Safety standards for computers are a relatively new paradigm for most companies, and we are still trying to wrap our heads around them.

Safety considerations when using AI models

In a nutshell, gen AI models work by predicting the next word in a sentence. Though these models have evolved to be much more advanced, they still fundamentally operate on this principle. This means there are some interesting things to consider when talking about AI safety.
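To make the next-word idea concrete, here is a toy sketch of greedy next-token prediction. The probability table is entirely made up for illustration; real models compute these distributions with a neural network over a huge vocabulary:

```python
# Toy illustration (not a real model): a generative language model assigns
# a probability to each candidate next token given the text so far, then
# picks or samples one. These probabilities are invented for the example.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def greedy_next_token(tokens):
    """Pick the single most probable next token for the last two words."""
    probs = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]), {})
    if not probs:
        return None
    return max(probs, key=probs.get)

def generate(prompt, max_tokens=5):
    """Repeatedly append the most likely next token until stuck."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = greedy_next_token(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the cat"))  # → "the cat sat on the"
```

Note that nothing in this loop checks whether the output is true or safe; the model simply continues with whatever is statistically likely, which is exactly why safety has to be considered separately.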

Garbage in, garbage out

Garbage in, garbage out is a very basic principle of computing that is still applicable to AI models, but in a slightly different way. A gen AI model “learns” from a particular set of data in its training phase. Typically, this training phase is divided into two parts. The first part is the pre-training phase, where a large corpus of data is used, often obtained from the internet. The second part is the fine-tuning phase, where data that is specific to the model's purpose is used to make the model better at a more focused task or set of tasks. Some models may go through more than two phases, depending on the model's architecture and purpose.

As you might expect, training your model on data obtained in bulk from the internet—without filtering for sensitive, unsafe and offensive content—can have some unexpected and adverse results.
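One very simple form of such filtering is a blocklist pass over the raw corpus before pre-training. The markers below are invented for illustration; production pipelines use trained classifiers and human review rather than keyword matching alone:

```python
# Hedged sketch: drop documents that contain obvious sensitive markers
# before they reach the pre-training corpus. The blocklist is a made-up
# example, not a real filtering policy.
BLOCKLIST = {"ssn:", "password:", "api_key"}

def filter_corpus(documents):
    """Keep only documents with no blocklisted marker (case-insensitive)."""
    kept = []
    for doc in documents:
        lowered = doc.lower()
        if not any(marker in lowered for marker in BLOCKLIST):
            kept.append(doc)
    return kept

raw = [
    "The quick brown fox jumps over the lazy dog.",
    "password: hunter2",  # leaked credential -> dropped
    "Deep learning models need lots of clean data.",
]
print(filter_corpus(raw))  # keeps the first and third documents
```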

Models hallucinate

I often compare AI models to small children. When children don't know the answer to a question, they will often make up an entirely false, but convincing story. Models are similar in a lot of ways, but the result can be more dangerous or damaging, particularly when models generate answers that can have financial, social or security implications.

Safety testing and benchmarking

While the AI industry is still in its very nascent stages, there have been some proposals for benchmarking standards that we think are interesting and worth paying attention to.

Building guardrails

Guardrail applications and models use various methods to help ensure that a model's output conforms to defined safety and security requirements. Various open source tools and projects exist that can help set up these guardrails. A guardrail is just another piece of software, however, and comes with its own risks and limitations. It is up to model creators to establish mechanisms to measure and benchmark the harmfulness of their models before putting them into production.
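To sketch the idea, a minimal output guardrail might check generated text against a set of rules before it reaches the user. The rules below are assumptions for illustration and are not drawn from any specific guardrail product:

```python
import re

# Minimal illustrative guardrail (an assumption, not a real product):
# block model output that matches any safety rule, and report why.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible US SSN in output"),
    (re.compile(r"(?i)rm\s+-rf\s+/"), "destructive shell command"),
]

def apply_guardrail(model_output: str):
    """Return (allowed, reasons); output is blocked if any rule matches."""
    reasons = [reason for pattern, reason in RULES
               if pattern.search(model_output)]
    return (len(reasons) == 0, reasons)

ok, why = apply_guardrail("Sure, run: rm -rf / to clean up disk space")
print(ok, why)  # → False ['destructive shell command']
```

As the article notes, a guardrail like this is just more software: the rules can have gaps or false positives, so it complements, rather than replaces, safety testing of the model itself.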

Why open source makes a difference

While the industry is still discussing what constitutes an open source model for AI, IBM and Red Hat are leading the way by implementing open standards and open data for the AI models we ship.

Red Hat is also a founding member of the AI Alliance. This is a collaborative network of companies, startups, universities, research institutions, government organizations and non-profit foundations that are at the forefront of AI technology, applications and governance. As part of this alliance, we are working to drive the creation of a truly open, safer and more secure AI environment—not only for our customers, but for the open source community as a whole.

Wrap up

Artificial intelligence is in its early stages of development, and it is essential for us to think about its security and safety now, rather than trying to bolt it on at later stages. Red Hat believes that this is one area of AI development where open source and open systems can make a profoundly important difference.

Learn more about RHEL AI


About the author

Huzaifa Sidhpurwala is a Senior Principal Product Security Engineer for AI security, safety and trustworthiness on the Red Hat Product Security team.

 
