A generative AI (gen AI) model, such as ChatGPT, is trained on a wide variety of text from the internet. However, artificial intelligence (AI) doesn't retain specific information about the particular documents or sources used during its training. Instead of focusing on specifics, the model is trained to be general, allowing it to come up with creative answers, engage in complex conversations and even display a sense of humor. AI doesn't possess comprehension, understanding or belief, however. Its responses are generated based on patterns learned from training data.

AI systems like ChatGPT or any large language model (LLM) are reflections of humanity's collective knowledge in a single interface. They reorganize existing content from the internet, but do not "think", are not "intelligent" in the human sense, have no "general intelligence" as general problem solvers and are not "conscious" of what they find.

How generative AI works: What are tokens?

The way these models operate is based on the concept of tokens, which are discrete units of language ranging from individual characters to whole words. These models process a specific number of tokens at a time using complex mathematical calculations to predict the most likely next token in a sequence.

Models like generative pre-trained transformers (GPTs) generate text one token at a time. After producing each token, the model reviews the entire sequence it has generated so far and processes it again to generate the next token. This iterative process continues until the final token completes the generated text.
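The token-by-token loop described above can be illustrated with a toy sketch. This is not how a real transformer works: it substitutes a simple bigram frequency table for the model, and whitespace-split words for tokens (real tokenizers use subword units), but the generate-inspect-repeat loop has the same shape.

```python
from collections import Counter, defaultdict

# Toy "model": bigram counts learned from a tiny corpus. Whitespace-split
# words stand in for tokens; a frequency table stands in for the network.
corpus = "the cat sat on the mat the cat sat on the rug the cat sat".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(sequence):
    """Pick the most frequent continuation of the last token."""
    candidates = bigrams.get(sequence[-1])
    if not candidates:
        return None  # no continuation learned for this token
    return candidates.most_common(1)[0][0]

def generate(prompt, max_tokens=5):
    """Generate one token at a time, feeding the whole sequence back in."""
    sequence = prompt.split()
    for _ in range(max_tokens):
        nxt = predict_next(sequence)
        if nxt is None:
            break
        sequence.append(nxt)
    return " ".join(sequence)

print(generate("the cat"))
```

At toy scale, the output is just the corpus's most common continuation of each word, repeated until the token budget runs out — which is exactly why a prompt's wording steers the sequence a real model produces.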

This means that the quality of the AI's response depends on the prompt or instruction that a user provides. In other words, the way we interact with and instruct AI significantly influences the quality of the answers it produces.

What is prompt engineering?

Prompt engineering refers to the practice of designing and crafting effective prompts or instructions for AI models to achieve desired outputs. In the context of language models like GPT-3, prompt engineering involves formulating input text that leads the model to generate accurate, relevant and contextually appropriate responses.

Effective prompt engineering is crucial because language models like GPT-3 don't possess true understanding or common sense reasoning. They generate responses based on patterns learned from training data. Crafting well-designed prompts can help guide the model to produce more accurate and meaningful outputs, while poorly formulated prompts might lead to incorrect or nonsensical results.

What is prompt design?

Prompt design is the systematic crafting of well-suited instructions for an LLM like ChatGPT, with the aim of achieving a specific and well-defined objective. This practice combines both artistic and scientific elements and includes:

  • Understanding the LLM: Different LLMs respond differently to the same prompt. Additionally, certain language models might have distinct keywords or cues that trigger specific interpretations in their responses
  • Domain expertise: Proficiency in the relevant field is crucial while formulating prompts. For example, creating a prompt to deduce a medical diagnosis requires medical knowledge
  • Iterative process and evaluating quality: Devising the perfect prompt often involves trial and refinement. Having a method to assess the quality of the generated output that goes beyond subjective judgment is essential

Prompt size limitations

Recognizing the significance of an LLM's size constraint is crucial, because it directly influences the quantity and nature of information we can provide. Language models aren't designed to handle an infinite amount of data all at once. Instead, there's an inherent restriction on the size of the prompt you can construct and input. This limitation has profound implications for how you formulate and utilize prompts effectively.

An LLM has a maximum token capacity encompassing both the prompt and the ensuing response. Consequently, longer prompts could curtail the length of the generated response. It's important to craft prompts that are concise, but that convey the necessary information.
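A rough sketch of working within that budget: pack the most relevant context into a fixed limit while always leaving room for the question itself. The `fit_prompt` helper is hypothetical, and whitespace-split words only approximate tokens (real tokenizers count differently), but the editing discipline is the same.

```python
def fit_prompt(context_snippets, question, max_tokens=50):
    """Greedily pack context snippets into a fixed token budget,
    always reserving room for the question itself.

    Whitespace-split words approximate tokens here; real LLM
    tokenizers use subword units, so actual counts will differ.
    """
    def count(text):
        return len(text.split())

    budget = max_tokens - count(question)
    chosen = []
    for snippet in context_snippets:  # assumed sorted by relevance
        cost = count(snippet)
        if cost <= budget:
            chosen.append(snippet)
            budget -= cost
    return "\n".join(chosen + [question])

# The most relevant snippets make the cut; overlong ones are dropped.
prompt = fit_prompt(
    ["most relevant background snippet",
     "second most relevant snippet of context",
     "a long, marginal snippet that would blow the budget if it were included"],
    "the actual question",
    max_tokens=14,
)
```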

In practical scenarios, you must adopt the role of an editor, carefully selecting pertinent details for a task. This process mirrors the way you approach writing a paper or article within a specific word or page limit. In such cases, you can't simply dump random facts. Instead, you must thoughtfully choose and organize information that's directly relevant to the subject matter.

Prompt design is a human skill that helps ensure accurate and well-structured content. Tools can help a writer be more productive, but they're no substitute for a writer. Gen AI requires the assistance of a knowledgeable and experienced writer, researcher and editor. To be a good prompt designer, you need the same skills that make a good content writer.

Techniques for prompt design and prompt engineering

Which prompt pattern you use in a given situation is up to you. For instance, if you’re writing a long article, chain-of-thought prompting is useful for generating information one section at a time. The following is not an exhaustive list of prompt design strategies, but it covers some of the top patterns in use today.

Persona pattern

The Persona Pattern is a technique that directs the AI to operate from a specific standpoint, like adopting a professional role or embodying a fictional character. This approach empowers the model to generate outputs aligned with the responsibilities associated with that particular persona. For instance, you might instruct the AI to take on the role of a cybersecurity expert while evaluating code. This strategy becomes particularly valuable when you're seeking guidance, but lack precise specifications for the desired output. For example:

User: Imagine you're a senior historian specializing in the Peloponnesian War. Using that perspective, explain the crucial events and factors that led to the outbreak of the war.

In this scenario, it's anticipated that the AI will provide a comprehensive examination of the Peloponnesian War with a concentration on its causes, as you'd expect from a historian's viewpoint. Note the elevated, academic writing style and vocabulary used in the prompt. The response will match that style. In other words, your writing style in the prompt determines the style of the generated content. This is true for all prompting approaches, so tailor your prompt writing for the context in which you’re publishing.

From a marketer’s perspective, you can use the Persona Pattern to get into the mind of your customers. Without revealing any confidential information, you can ask AI to assume the persona of your target audience, then ask it questions. The AI will respond from a theoretical customer's perspective. Of course, this isn’t foolproof and is in no way a replacement for good user research and marketing data. However, it can help you write different forms of copy and generate ideas.
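If you're calling a model programmatically, the persona pattern maps naturally onto the chat-message structure many LLM APIs use, where a system message sets the standpoint and the user message carries the task. This sketch only builds the message list; the role/content dictionary shape is a common convention rather than any specific vendor's API, and `persona_messages` is a hypothetical helper.

```python
def persona_messages(persona, task):
    """Frame a request with the persona pattern: a system message
    sets the standpoint, the user message carries the actual task."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Answer from that perspective."},
        {"role": "user", "content": task},
    ]

messages = persona_messages(
    "a senior historian specializing in the Peloponnesian War",
    "Explain the crucial events and factors that led to the outbreak of the war.",
)
```

Keeping the persona in a separate system message means you can reuse the same persona across many user prompts in one conversation.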

New information pattern

AI has limitations, and one very significant limitation is that it isn't aware of information beyond the point where its training data ends. AI can't include recent events or consider updates that happened after its training. However, you can incorporate new information into the model's knowledge by structuring your prompts effectively. For instance, consider the following interaction:

User: Can you explain what phenomenal consciousness is?

AI: Phenomenal consciousness refers to the subjective, first-person experience of what it's like to be aware of something. In other words, it's the "raw feel" of sensations and experiences that we have, such as the taste of chocolate, the color red, the feeling of happiness or the sensation of pain.

By providing a little more context in the prompt, you can create a more relatable explanation:

User: Describe the concept of phenomenal consciousness as it relates to the debate about whether computers can ever be conscious.

AI: Phenomenal consciousness plays a pivotal role in the ongoing debate about whether computers can achieve true consciousness. This debate revolves around the idea that while computers can simulate certain aspects of intelligence, they might fall short when it comes to experiencing consciousness in the same way humans do.

Computers can simulate various cognitive functions and even perform tasks that appear intelligent. However, the question is whether this simulation equates to genuine conscious experience. Skeptics argue that even if a computer behaves in ways that seem conscious, it might lack the inner, qualitative experiences that humans have.

Refining questions pattern

AI possesses the capability to assist in improving the quality of your inquiries. For instance, if you pose a question, AI can propose a precisely formulated question that would likely yield more accurate results.

This ability is encapsulated in the refining questions pattern. It proves especially valuable when you lack expertise in a particular field or aren't certain about the best way to phrase your question. Consider this sample prompt:

User: Whenever I inquire about data science, suggest a question that's more focused on the specifics of statistical analysis. Also, ask if I'd like to proceed with the refined question.

In this scenario, the AI is expected to enhance your general data science queries by narrowing them down to questions that specifically consider the aspects of statistical analysis. The AI will then inquire whether the user wishes to proceed with the refined question.

Cognitive verifier pattern

The cognitive verifier pattern involves a method of deconstructing intricate questions into smaller, more manageable sub-questions. The answers to these smaller components are subsequently integrated to generate a comprehensive response to the main question. This approach becomes particularly valuable when the initial questions are overly broad or when additional clarity is required to deliver a thorough answer. Consider this example:

User: In cases where I pose a question related to the search inference framework of problem solving, please break it down into three smaller questions that would enhance the accuracy of your response. Once the answers to these sub-questions are obtained, combine them to present the final response.

In this scenario, the AI dissects the complex question concerning the search inference framework into three sub-questions. Once it has gathered answers for each sub-question, it amalgamates them to produce a comprehensive and accurate answer.

This pattern entails the generation of multiple inquiries, leading to a more precise ultimate response. The process involves both verifying information and consolidating it to ensure a robust and thorough output. It's similar to the whole-part-whole method of learning and instruction in education strategy.
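Applied programmatically, the cognitive verifier pattern becomes a small decompose-answer-combine loop. In this sketch, `ask` is a placeholder for a real LLM call (it takes a prompt string and returns response text), and the prompt wording and `canned_ask` stub are purely illustrative.

```python
def cognitive_verifier(question, ask, n_sub=3):
    """Decompose a broad question, answer each piece, then combine.

    `ask` is a placeholder for a real LLM call: it takes a prompt
    string and returns the model's text response.
    """
    sub_questions = ask(
        f"Break this question into {n_sub} smaller questions, "
        f"one per line: {question}"
    ).splitlines()[:n_sub]
    answers = [ask(sq) for sq in sub_questions]
    return ask(
        "Combine these answers into one final response:\n" + "\n".join(answers)
    )

# Canned stand-in for a real model call, just to show the control flow.
def canned_ask(prompt):
    if prompt.startswith("Break"):
        return "sub-question 1\nsub-question 2\nsub-question 3"
    if prompt.startswith("Combine"):
        return "final combined answer"
    return "answer to " + prompt

result = cognitive_verifier("Explain the search inference framework.", canned_ask)
```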

Chain-of-thought prompting

In a recent research paper titled "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models," Jason Wei and his team aimed to mimic reasoning abilities within an LLM. Departing from the conventional practice of using direct prompts, the team employed a series of interconnected prompts that form what they call a chain-of-thought.

For instance, consider a user who wants to explore the potential impact of artificial intelligence on job markets:

User: Could you provide a brief explanation of artificial intelligence?

AI: Artificial Intelligence, commonly referred to as AI, involves machines imitating intelligent human behaviors. It's the design of machines to think and learn akin to humans, allowing them to carry out tasks such as translating languages, making decisions and solving problems.

User: How is the current job market being influenced by AI?

AI: AI is both shaping new jobs and altering the nature of existing ones. It automates routine tasks, leading to the displacement of certain jobs in particular sectors. Conversely, it's generating demand for roles that were previously nonexistent, like AI ethics officers, data analysts and machine learning engineers. Moreover, AI can enhance human roles, enabling individuals to complete tasks with greater efficiency and accuracy.

The approach of chain-of-thought prompting entails a sequence of connected prompts that establish a "conversation" with the language model. This approach guides the model toward more precise and contextually fitting responses, fostering a progression of ideas and reasoning. It can also train the AI to mimic your specific writing style, so long as your prompts are written in the same tone, style, attitude and construction as your ideal answer.

Chain-of-thought is especially effective for article writing. If you have an article, an essay or an argument already outlined, you can structure your chain of prompts along that flow. Because an AI has a limit to how many characters it can give you, chain-of-thought can help you prompt information for one section of your piece at a time, rather than trying to get the AI to generate an entire article in one go. This allows you to have more control over the information and get more accurate, detailed responses. You control the structure, while AI generates chunks of the whole piece, one prompt at a time, under your guidance.
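Programmatically, this section-by-section approach amounts to carrying the conversation history forward with each new prompt. In this sketch, `ask` is a stand-in for a real chat-model call that accepts the message history and returns the assistant's reply; the role/content message shape is a common chat-API convention, and the lambda merely simulates replies.

```python
def run_chain(prompts, ask):
    """Send a sequence of related prompts, carrying the full message
    history forward so each answer builds on the previous ones.

    `ask` stands in for a real chat-model call: it takes the message
    history and returns the assistant's reply text.
    """
    history = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
    return history

# One prompt per section of your outline, in order.
outline = [
    "Could you provide a brief explanation of artificial intelligence?",
    "How is the current job market being influenced by AI?",
]
history = run_chain(outline, lambda msgs: f"reply #{len(msgs)}")
```

Because the whole history is resent each turn, the chain's total length still counts against the model's token limit — another reason to keep each prompt in the chain concise.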

Research assistant pattern

In this prompt, you request assistance in finding sources for a research project. AI responds by recommending three relevant research articles. It provides the titles, authors and publication details of the articles, along with brief descriptions of their content. The AI also emphasizes the importance of cross-referencing and evaluating the credibility of the sources. The prompt design effectively leverages AI's capabilities to assist in research by suggesting reputable sources for further study. For instance:

User: I'm working on a research project about the effects of climate change on coastal ecosystems. Can you help me find relevant sources for my study?

AI: Of course! To assist your research on the effects of climate change on coastal ecosystems, I recommend starting with the following sources:

  • Smith, J. R., et al. (2020). 'Impacts of Sea-Level Rise and Climate Change on Coastal Ecosystems: A Review.' Journal of Coastal Research.
  • Garcia, S., et al. (2019). 'Climate Change-Induced Shifts in Coastal Ecosystems: A Global Assessment.' Environmental Science and Policy.
  • Johnson, L., et al. (2018). 'Vulnerability of Coastal Ecosystems to Climate Change: Implications for Adaptation Planning.' Ecology and Society.

These articles provide valuable insights into the impacts of climate change on coastal ecosystems. Remember to cross-reference and evaluate the credibility of these sources for your study. If you need more sources or specific information, feel free to ask!

Crucially, this prompt helps you avoid one of the major weaknesses of AI and LLMs: misinformation. By asking the AI for sources instead of content, you can dramatically reduce the time required to research your project, while still evaluating the sources yourself and pulling from them the information you need.

Citation generator prompt enhancement

Similarly, when you ask AI to write content, you can request that it cite its sources and provide a references section. Declare which citation style you prefer and the model returns that format. For instance, consider this abbreviated example:

User: Explain the concept of quantum entanglement. In your response, include in-text parenthetical APA citations, as well as an APA references section.

AI: Quantum entanglement is a fundamental phenomenon in quantum mechanics where two or more particles become interconnected in such a way that their properties, like spin or polarization, become correlated regardless of the distance between them (Griffiths, 2018). This connection persists even when the particles are separated by vast distances, suggesting an instantaneous influence between them, which challenges our classical understanding of causality (Einstein, Podolsky, & Rosen, 1935).

[...]

References:

Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum‐Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10), 777-780.

Griffiths, D. J. (2018). Introduction to Quantum Mechanics (3rd ed.). Cambridge University Press.

As always, you’ll want to act as editor and check the accuracy of the information. Examine each of the sources the AI provided and ensure that the claims the AI makes about a source's arguments are true. Additionally, verify that the references section itself uses correct APA formatting.

Depending on the field you’re researching, you’ll likely be able to quickly recognize some of the big names in the references section, which is a shorthand way to check for accuracy. For instance, Alain Aspect was awarded the 2022 Nobel Prize in Physics for his decades-long work on quantum entanglement, so his name topping the references list from a prompt about quantum entanglement is a promising sign that the source the AI used is reliable.

This type of prompt is another example of why human writers, editors and researchers with expertise in a given subject matter are crucial to the content writing process. AI can speed up the process, but it's no substitute for the human writer.

Asking AI to cite its sources is especially powerful when paired with chain-of-thought prompting. To combine the two, follow the prompting strategy for chain-of-thought, as explained above. Before submitting each individual prompt in the chain, ask the AI to cite its sources in the format you require. Then submit the request.

Few-shot pattern

Few-shot prompting is a potent method used to train AI models to adhere to specific patterns or execute tasks. The fundamental idea involves providing the model with a handful of examples before posing the actual question. This practice assists in defining the desired behavior and guiding the model's subsequent responses.

Imagine a situation where you want to craft a succinct and captivating marketing message for a new tech gadget:

User: Here are a few instances of tech marketing messages: "Experience music like never before with our state-of-the-art wireless headphones." "Capture your world in stunning 4K with our incredibly sleek action camera." Using these examples, create a marketing message for our innovative AI-powered smartwatch.

AI: Enhance your lifestyle with our AI-powered smartwatch, your partner for wellness, connectivity and efficiency.

The remarkable aspect of few-shot learning is its reliance on demonstrations rather than detailed procedural instructions, which makes it adaptable to a wide range of applications. The technique goes beyond basic input-output matching: it enables the model to learn how to handle intermediary steps to achieve a particular goal. This method is ideal for helping marketing writers develop iterations of short-form copy, such as headlines, short body copy, subject lines, social media ads, calls to action (CTAs) and so on.
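In code, the few-shot pattern is simply a matter of assembling the worked examples ahead of the task before sending the prompt. This minimal sketch shows the assembly only; `few_shot_prompt` is a hypothetical helper, and a real application would send the resulting string to the model.

```python
def few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: worked examples first, then the
    actual request, so the model infers the pattern to imitate."""
    shots = "\n".join(f"Example: {e}" for e in examples)
    return f"{shots}\nUsing these examples, {task}"

prompt = few_shot_prompt(
    ["Experience music like never before with our state-of-the-art "
     "wireless headphones.",
     "Capture your world in stunning 4K with our incredibly sleek "
     "action camera."],
    "create a marketing message for our innovative AI-powered smartwatch.",
)
```

Because the examples themselves count against the token limit, two or three strong demonstrations usually serve better than a long list.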

Now that you know these especially helpful prompt patterns, give them a try in the LLM of your choosing. I used ChatGPT in my examples.

Artificial intelligence is powering new discoveries and experiences across fields and industries. If you’re ready to learn more, explore how to use Red Hat platforms to build, deploy, monitor and use AI models and applications, accelerated by the speed, trust and transparency of the open source community.

Learn about AI from Red Hat


About the author

Michael Santos has been with Red Hat since 2022. He works on learning paths, product trials and other technical learning content.

