A generative AI (gen AI) model, such as ChatGPT, is trained using a variety of texts from the internet. Instead of focusing on specifics, the model is trained to be general, allowing it to come up with creative answers, engage in complex conversations, and even display a sense of humor. Despite this, AI doesn't possess comprehension, understanding, or belief. Its responses are generated based on patterns learned from training data.
AI systems like ChatGPT or any large language model (LLM) are reflections of humanity's collective knowledge in a single interface. These models' intelligence comes from their success as prediction machines, which allows them to emulate human interaction in a conversational format.
How generative AI works: What are tokens?
The way gen AI models operate is based on the concept of tokens, which are discrete units of language ranging from individual characters to whole words. These models process a specific number of tokens at a time using complex mathematical calculations to predict the most likely next token in a sequence.
Models like generative pre-trained transformers (GPTs) generate text one token at a time. After producing each token, the model takes the prompt plus everything it has generated so far and processes that sequence again to predict the next token. This iterative process continues until a final token completes the generated text.
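To make that loop concrete, here is a deliberately toy sketch in Python. The lookup table and word-level "tokenizer" are invented stand-ins, not how any real model works internally, but the predict-append-repeat loop is the same one described above.

```python
# Toy illustration of token-by-token generation. The lookup table stands in
# for a real model: given the last token, it "predicts" the next one.
NEXT_TOKEN = {
    "the": "cat", "cat": "sat", "sat": "on",
    "on": "a", "a": "mat", "mat": "<end>",
}

def generate(prompt: str, max_new_tokens: int = 20) -> str:
    tokens = prompt.lower().split()        # crude word-level "tokenizer"
    for _ in range(max_new_tokens):
        next_token = NEXT_TOKEN.get(tokens[-1], "<end>")  # predict the next token
        if next_token == "<end>":          # stop when the "model" signals the end
            break
        tokens.append(next_token)          # feed the longer sequence back in
    return " ".join(tokens)

print(generate("The cat"))  # -> "the cat sat on a mat"
```

A real LLM does the same thing with numeric token IDs and a neural network in place of the lookup table, which is why the tokens already in the sequence, including your prompt, steer every token that follows.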
This means that the quality of the AI's response depends on the prompt or instruction that a user provides. In other words, the way we interact with and instruct AI significantly influences the quality of the answers it produces.
What is prompt engineering?
Prompt engineering refers to the practice of designing and crafting effective prompts or instructions for AI models to achieve desired outputs. In the context of language models like GPT, prompt engineering involves formulating input text that leads the model to generate accurate, relevant, and contextually appropriate responses.
Effective prompt engineering is important because language models like GPT don't possess true understanding or common sense reasoning. They generate responses based on patterns learned from training data. Crafting well-designed prompts can help guide the model to produce more accurate and meaningful outputs, while poorly formulated prompts might lead to incorrect or nonsensical results.
What is prompt design?
Prompt design is the systematic crafting of well-suited instructions for an LLM like ChatGPT, with the aim of achieving a specific and well-defined objective. This practice combines both artistic and scientific elements, including:
- Understanding the LLM: Different LLMs respond differently to the same prompt. Additionally, certain language models might have distinct keywords or cues that trigger specific interpretations in their responses.
- Domain expertise: Proficiency in the relevant field is crucial when formulating prompts. For example, creating a prompt to deduce a medical diagnosis requires medical knowledge.
- Iterative process and evaluating quality: Devising the perfect prompt often involves trial and refinement. Having a method to assess the quality of the generated output that goes beyond subjective judgment is essential.
Prompt size limitations
Recognizing the significance of an LLM's prompt size constraint is important as well, because it directly influences the quantity and nature of information you can provide to it. Language models aren't designed to handle an infinite amount of data all at once. Instead, there's an inherent restriction on the size of the prompt you can construct and input.
An LLM has a maximum token capacity, often called its context window, that encompasses both the prompt and the ensuing response. Consequently, longer prompts can curtail the length of the generated response.
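One practical consequence: it's worth checking how many tokens a prompt consumes before you send it. The sketch below assumes the open source tiktoken tokenizer library (pip install tiktoken); the 8,000-token budget and 1,000-token reserve are illustrative numbers, not any particular model's real limits.

```python
# Rough pre-flight check of prompt size. Assumes the `tiktoken` library;
# the budget numbers are illustrative, so substitute your model's actual limits.
import tiktoken

MODEL_BUDGET = 8000       # total tokens the model can handle (prompt + response)
RESPONSE_RESERVE = 1000   # tokens we want to leave free for the answer

encoding = tiktoken.get_encoding("cl100k_base")

def fits_in_budget(prompt: str) -> bool:
    prompt_tokens = len(encoding.encode(prompt))
    print(f"Prompt uses {prompt_tokens} tokens")
    return prompt_tokens <= MODEL_BUDGET - RESPONSE_RESERVE

fits_in_budget("Review the following HTML code and find the bug: ...")
```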
As a developer using the tool, you must craft prompts that are concise but that also convey the necessary information. In practical scenarios, you must adopt the role of an editor, carefully selecting pertinent details for a task. This process mirrors the way you approach writing a paper or article within a specific word or page limit. In such cases, you can't simply dump random facts. Instead, you must thoughtfully choose and organize information that's directly relevant to the subject matter.
Prompt design is a human skill that helps ensure accurate and well-structured content. Tools can help a developer be more productive, but they're no substitute for the human mind.
We must also note at the outset that a gen AI tool is not capable of outputting production-ready code. While the AI can save you time, you must still be responsible for applying your expertise to your work, including (and especially) anything generated by the AI.
Prompt design and prompt engineering techniques for developers
What prompt you decide to use in which situation is up to you. There is no one-size-fits-all method of getting what you need out of the AI tool. As such, experimentation is important, especially within the specific context of your work. The following is not an exhaustive list of all prompt design strategies, but it includes some of the top patterns in use today by developers. These can get you started, but don’t forget to iterate on them.
Creation and generation prompts
The simplest form of prompt for gen AI is, as you might expect, a request for it to generate a small piece of code. In this case, issue the directive to the AI in a short sentence. You can also structure the prompt as a question, such as when you're searching for the right code snippet to use in a given situation.
For instance:
- "Generate a JavaScript function that returns the sum of the following array: [5, 9, 24, 48]."
- "How do I create an unordered list with a gray background in HTML and CSS?"
- "In C#, write a code snippet that tells me if a specific file exists or not."
These prompts are effective because they address specific questions quickly and easily. These generation prompts can also save you a lot of time that would otherwise be spent searching the internet or books for particular code snippets.
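To give a sense of what comes back, a creation prompt like the third example above typically yields a short, self-contained snippet plus a sentence or two of explanation. Here's the equivalent shape in Python (the C# answer would be structurally similar); treat it as illustrative of the pattern, not the exact output you'll receive.

```python
# The kind of snippet a simple creation prompt tends to produce: small,
# self-contained, and ready to adapt. Shown in Python for illustration.
from pathlib import Path

def file_exists(path: str) -> bool:
    """Return True if the given path points to an existing file."""
    return Path(path).is_file()

print(file_exists("reference_section.txt"))  # False unless the file is present
```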
Persona pattern
The goal of this prompt strategy is to instruct the AI to assume the personality of a specific type of person. It’s a favorite of content writers, who can often generate high-quality content by directing the AI to play an expert in a given field. It’s also well-suited for developers, who are themselves a type of "persona" with sub-domains of expertise you can use to prompt the model.
Simply instruct the AI to act as, for example, a certain type of developer, then give it a specific output request.
For instance:
- "Adopt the role of a web developer. Generate HTML and CSS code for an accordion that has drag and drop functionality."
- "Act like a game developer. Write C# code for collecting coins in an 8-bit platformer game."
- "Play the role of a backend developer. How would you go about identifying the bug that’s causing an icon to appear correctly in the content editor but not on the live page?"
This prompt is useful because it narrows the scope of the AI’s output to your own domain of expertise. This makes it easier for you to check the output for quality. Additionally, you can even use this prompt to converse with the AI, effectively turning the interface into an ideation partner, as demonstrated in the third prompt example above.
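If you send these prompts from code rather than a chat window, the persona typically maps onto a system message. Here's a minimal sketch assuming the official OpenAI Python client (pip install openai), an API key in the OPENAI_API_KEY environment variable, and a model name chosen purely as an example; adapt all three to whatever LLM you actually use.

```python
# Minimal sketch of the persona pattern over an API, assuming the official
# OpenAI Python client; swap in your own model name and prompts.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        # The persona goes in the system message...
        {"role": "system", "content": "Adopt the role of a web developer."},
        # ...and the specific request goes in the user message.
        {"role": "user", "content": "Generate HTML and CSS code for an "
                                    "accordion that has drag and drop functionality."},
    ],
)
print(response.choices[0].message.content)
```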
Problem solving prompts
With this prompt strategy, you can once again leverage the conversational aspects of gen AI interfaces by directing the AI to look for solutions with you. As such, this prompt is useful both for brainstorming a project and for tackling issues that may arise during development or testing.
Simply describe the problem and ask the AI to help you solve it. You can structure this prompt as a question or as an imperative. You can also include your code in the prompt and ask the AI to review and revise it.
For instance:
- "Review the following HTML code and find the bug that’s causing the image to cover the text: [paste relevant code]."
- "I have a memory leak problem. Can you help me identify a possible solution in C#?"
- "Come up with a solution to a thread deadlock problem in Java. Here’s my code. Where am I going wrong? [paste relevant code]."
Development patterns
Of course, you can attempt to have the gen AI develop longer-form code instead of just snippets and functions. We must reiterate that, if you opt for this strategy, your expertise will be needed to check the quality of the AI’s output. It is not recommended to treat the generated code as production-ready.
For this prompt, direct the AI to develop programs and scripts that accomplish certain tasks. There is no limit to the iterations of this prompt that you can come up with, which is why it’s both so effective and much in need of a human’s guiding hand. The results you get will likely be less consistent with this strategy than with the ones above.
For instance:
- "Develop a Java script that generates a strong password that relates to astronomy."
- "Develop a C# program for reading CSV files and creating a pie chart from the data."
- "Help me develop a program that alphabetizes and formats in APA style the citations listed in the file:
reference_section.txt
."
The rewriting prompt
There are times when you think your code works but could be cleaner and more aligned with best practices. Gen AI can help you clean it up.
In this prompt, direct the AI to review a code snippet and rewrite it according to best practices. Then, make sure that you review the AI’s output for quality. Additionally, you can ask the AI to provide a checklist of best practices to help you manually review your work. This prompt is a great way to make your code more efficient.
For instance:
- "See my C# code below and rewrite it according to Google style guidelines: [paste relevant code]."
- "Is my Java script in alignment with best practices? [paste relevant code]."
- "In checklist format, summarize best practices for Python functions according to the most commonly used style guide."
The instructor pattern
One underappreciated use of gen AI is as an educational tool. There will be times when you encounter code that follows older best practices, or practices that you don’t fully understand. In this case, you can prompt the AI to explain how that code works.
For instance:
- "Take a look at the following C# function and tell me how it works. How could I improve it? [paste relevant code]."
- "I’m editing a website that was first built in 2007. Is there a better practice today for structuring the following code? [paste relevant code]."
- "Can you explain to me how the following code is inefficient? [paste relevant code]."
Of course, you can also ask the AI higher-level, conceptual questions. Doing so provides an opportunity to combine this prompt with the problem solving approach outlined above. You can also ask questions purely for the purpose of learning.
For example:
- "My Java application keeps freezing and crashing. My team suspects it’s an excessive garbage collection problem. Can you explain what that is and suggest steps to fix it?"
- "Teach me the basic syntax of a regular expression and how to use it."
- "Explain how to write a basic React component boilerplate code."
Now that you know these especially helpful prompt patterns, give them a try in the LLM of your choosing. In fact, I used ChatGPT as the basis for my examples. Remember that you can iterate on any of these prompts. There are many ways to vary the strategies we’ve discussed so that they best fit your needs.
Artificial intelligence is powering new discoveries and experiences across fields and industries. If you’re ready to learn more, explore how to use Red Hat platforms to build, deploy, monitor, and use AI models and applications, accelerated by the speed, trust, and transparency of the open source community.