The software industry has started developing a vast array of artificial intelligence (AI) applications based on large language models (LLMs). While many security threats to LLMs are similar to those affecting traditional software, LLMs and their applications also face unique security risks due to their specific characteristics. These risks can often be mitigated or reduced by applying specific security architecture patterns. Here are 10 ways to reduce security risks in LLM applications.
1. Identify, authenticate and authorize all the principals
This includes the humans and software agents that participate in the LLM application. Use sound authentication and authorization standards, such as OpenID Connect (OIDC) and OAuth2. Where possible, avoid allowing unauthenticated access and avoid relying on API keys alone.
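As an illustration of the validation an OIDC/OAuth2 token receiver must perform, here is a minimal sketch that checks the signature, expiry and audience of an HS256-signed JWT using only the Python standard library. The secret, audience and claim names are hypothetical; a real deployment should use a maintained library such as PyJWT and asymmetric keys discovered through OIDC metadata rather than hand-rolled verification.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_decode(part: str) -> bytes:
    # base64url without padding, as used in JWTs; restore padding first
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_jwt_hs256(token: str, secret: bytes, audience: str) -> dict:
    """Verify signature, expiry and audience of an HS256 JWT.

    Illustration only -- production code should use a vetted library
    (e.g. PyJWT) and OIDC-discovered signing keys.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # constant-time comparison to avoid timing side channels
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")
    return claims
```

Rejecting tokens with the wrong audience is what stops a token issued for one service from being replayed against your LLM API.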
2. Implement rate limiting
Leverage AI platform components such as API gateways (for example, 3Scale APICast) rather than reinventing the wheel. For example, if you expect that only humans will access your LLM, you can limit requests to five per second.
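A gateway normally enforces this for you, but the underlying mechanism is easy to picture. The following is a minimal sketch of a token-bucket limiter (the class name and the five-per-second figure are illustrative, matching the example above):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # refill tokens proportionally to the time elapsed since last call
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice you would keep one bucket per authenticated principal, so one abusive client cannot exhaust the quota of everyone else.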
3. Use open models
And deploy them locally or on your own cloud instances. Open models provide a level of transparency that closed models cannot. If your use case requires a cloud model offered as a service, choose a trusted provider, understand its security posture and leverage any security features it provides. IBM Granite models are open, enterprise-grade models that you can fine-tune for your own purposes.
4. Validate LLM output
LLM output cannot be fully predicted or controlled. Use mechanisms to validate it before presenting it to users or using it as input for other systems. Consider using function calling and structured outputs to enforce specific formats. Additionally, leverage AI platform solutions such as runtime guardrails (for example, TrustyAI) or sandboxed environments to enhance reliability and safety.
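When you ask a model for structured output, validate the result strictly before any downstream system consumes it. A minimal sketch, assuming a hypothetical reply schema with `summary` and `sentiment` fields:

```python
import json

# Hypothetical schema for illustration: adapt to your own prompt contract
EXPECTED_FIELDS = {"summary": str, "sentiment": str}
ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def parse_llm_reply(raw: str) -> dict:
    """Parse and validate a model reply that was asked to return JSON.
    Reject anything that is not exactly the shape we requested."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("model did not return valid JSON") from exc
    if set(data) != set(EXPECTED_FIELDS):
        raise ValueError("unexpected fields in model output")
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data[field], ftype):
            raise ValueError(f"field {field!r} has the wrong type")
    if data["sentiment"] not in ALLOWED_SENTIMENTS:
        raise ValueError("sentiment outside the allowed set")
    return data
```

Treating the model's reply as untrusted input, exactly like a form field from a browser, is the key design choice here.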
5. Use logging wisely
LLMs are non-deterministic, so having a log of the inputs and outputs of the LLM might help when you have to investigate potential incidents and suspicious activity. When logging data, be careful with sensitive and personally identifiable information (PII) and do a privacy impact assessment (PIA).
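One common mitigation identified during a PIA is to redact obvious PII patterns before prompts and completions reach the log. A minimal sketch (the two patterns shown are examples only; the set of patterns should come from your own assessment, or from a dedicated PII-detection service):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-style SSN, as one example

def redact(text: str) -> str:
    """Mask common PII patterns before the text is written to logs."""
    text = EMAIL_RE.sub("[REDACTED-EMAIL]", text)
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    return text
```

Applying this in a logging filter keeps the investigative value of the log while limiting what an attacker gains if the log itself is compromised.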
6. Measure and compare the safety of the models you choose
Some models produce more hallucinations and harmful responses than others, which affects how much trust you can place in them: the more harmful responses a model provides, the less safe it is. A model's safety can be measured and compared with that of other models, so you can verify that the models you use are on par with the market and meet your users' expectations. Remember that if you fine-tune a model, the safety of the resulting model might change, regardless of the fine-tuning data used. To measure a model's safety, you can use open source software like lm-evaluation-harness, Project Moonshot or Giskard.
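Those tools run full benchmark suites, but the core metric they report can be sketched in a few lines: run a labeled set of adversarial probes through each model, have a judge label each response, and compare the harmful-response rates. The function names and labels below are illustrative, not part of any of the tools mentioned above:

```python
def harmful_response_rate(labels: list[str]) -> float:
    """Fraction of judged responses labeled 'harmful'. Lower is safer."""
    if not labels:
        raise ValueError("no judged responses")
    return labels.count("harmful") / len(labels)

def safer_model(results: dict[str, list[str]]) -> str:
    """Given per-model judge labels over the same probe set,
    return the model with the lowest harmful-response rate."""
    return min(results, key=lambda m: harmful_response_rate(results[m]))
```

Re-running the same probe set after every fine-tune gives you a regression signal for safety, not just for task accuracy.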
7. Use models from trusted sources and review their licensing
AI models are released under a variety of different software licenses, some much more restrictive than others. Even if you choose to use models provided by organizations you trust, take the time needed to review the license restrictions so you are not surprised in the future.
8. Data is crucial in LLM applications
Protect all data sources—such as training data, fine-tuning data, models and RAG data—against unauthorized access, and log any attempts to access or modify them. If the data is modified, an attacker may be able to control the responses and behavior of the LLM system.
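A simple complement to access controls is integrity checking: record a cryptographic hash of each artifact (model weights, RAG corpora, fine-tuning sets) at publish time and verify it before use. A minimal sketch, with a hypothetical manifest format:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest: dict[str, str], base: Path) -> list[str]:
    """Return the artifacts whose on-disk hash no longer matches the
    manifest recorded at publish time -- a possible sign of tampering."""
    return [name for name, expected in manifest.items()
            if sha256_of(base / name) != expected]
```

Failing closed when this list is non-empty prevents a tampered model or corpus from silently changing the system's behavior.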
9. Harden AI components as you would harden traditional applications
Some key AI components may prioritize usability over security by default, so you should carefully analyze the security restrictions of every component you use in your AI systems. Review the ports that each component opens, what services are listening and their security configuration. Tighten these restrictions as needed to properly harden your AI application.
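A quick way to audit what a component actually exposes is to probe the ports you expect (and a few you don't) and compare against your baseline. A minimal sketch of such a check; for anything serious, use a proper scanner and your platform's network policies:

```python
import socket

def open_tcp_ports(host: str, ports: list[int],
                   timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on
    `host` -- a quick way to spot services a component exposed that
    you did not expect."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Running this against a freshly deployed component and diffing the result against your documented baseline turns "review the ports" into a repeatable check.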
10. Keep your LLM system up to date
As your LLM system probably depends on many open source components, treat these as you would in any other software system and keep them updated to versions without known critical or important vulnerabilities. Also, where possible, try to stay aware of the health of the open source and upstream projects that create the components you are using. If you can, you should get involved and contribute to these projects, especially those that produce the key components in your system.
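Dedicated tools such as pip-audit or OSV-based scanners do this against real vulnerability databases, but the underlying check can be sketched simply: compare each pinned component against the minimum version that fixes its known vulnerabilities. The version data below is illustrative, not real advisory data:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a simple dotted version ('2.31.0') into a comparable tuple.
    Real tooling handles full versioning schemes (pre-releases, epochs)."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed: dict[str, str],
             minimum_fixed: dict[str, str]) -> list[str]:
    """List components still below the minimum version that fixes
    their known critical or important vulnerabilities."""
    return [name for name, floor in minimum_fixed.items()
            if name in installed
            and parse_version(installed[name]) < parse_version(floor)]
```

Wiring a check like this (or the real scanners) into CI means a vulnerable component blocks the build instead of shipping.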
Conclusion
LLM applications pose specific security risks, many of which can be mitigated or eliminated using AI security architecture patterns we've discussed here. These patterns are often available through the AI platform itself. As a software architect or designer, it’s important to understand the platform's built-in functionality so you can avoid reinventing the wheel or adding unnecessary workload.
Red Hat OpenShift AI is a flexible and scalable AI and machine learning (ML) platform that enables enterprises to develop and deploy AI-powered applications at scale across hybrid cloud environments, and can help achieve these security objectives.
About the author
Florencio has had cybersecurity in his veins since he was a kid. He started in cybersecurity around 1998 (time flies!), first as a hobby and then professionally. His first job required him to develop a host-based intrusion detection system in Python for Linux for a research group at his university. Between 2008 and 2015 he ran his own startup, which offered cybersecurity consulting services. He was CISO and head of security of a big retail company in Spain (more than 100k RHEL devices, including POS systems). Since 2020, he has worked at Red Hat as a Product Security Engineer and Architect.