I was recently asked to teach a small session on using prompt engineering to get the most value from AI tools. These are the notes on that class. 😊


👷🏼‍♂️ What is Prompt Engineering?

By this point, most of us have used some kind of LLM chat bot, such as Deepseek, ChatGPT, or Claude, at least a few times.

(Some of us use them basically all day, every day 😅)


And most of us have also had a similar experience of sometimes getting really great output from these AI tools, and other times getting complete garbage.


There are a number of pieces that go into having LLMs generate high-quality output (including complex questions like model type, model size, temperature settings, etc.) but one of the most important is definitely the quality of the prompt you give to the LLM.


Put garbage in, and you’ll get garbage out.


The art of crafting the message you’re sending the AI to maximize the quality of the output is called Prompt Engineering, and it’s key to working well with these tools.


Even lower-power, cheaper AI models (including ones that can run 100% for free, offline on your laptop) can still often generate really great output IF they are given a proportionately great prompt.


So, if you want to get more from AI tools, figuring out how best to prompt them is an important place to start!

🧩 The Lego Pieces of a Great Prompt

When I’m trying to craft a high-quality prompt, there are 5 pieces that I try to include.


In some cases, any one or two of these on their own can create a useful prompt.

However, all of them together can help the AI generate output that feels almost magical.

The 5 pieces are:

  1. Identity
  2. Define the Task
  3. Examples
  4. Context
  5. Confirmation Questions

1. 🧬 Identity

LLMs think differently from how humans think in many important ways.


But, there are a few notable ways that we think similarly.

One of those appears to be the way our sense of our own identity shapes the quality of our output.


If we believe we’re going to do a good job at something, we’re more likely to actually DO a good job at that task. If we are given a task that we don’t think we’re qualified for, and we let that belief steer the work we’re doing, we’re going to do a poor job.


Football players hype themselves up before a game by telling themselves they’re amazing football players.

Good public speakers prep themselves for a speech by reminding themselves that they are good public speakers.


And, for some reason, LLMs are better at the tasks you give them if you tell them they are good at that task before they start it.

So, a good LLM prompt starts with giving the AI the identity you want it to have for what you’re asking it to do.

Here are a few examples I used this week:

For translation:

You are a world-class translator. 
You create top-quality, culturally-aware, 
nuanced, natural-sounding translations of 
video scripts from English into Hindi. 

For programming:

Act as a professional, full-stack web developer. 
You build applications using React and serverless infrastructure, 
by architecting software solutions using the current best practices, 
and then by writing high-quality, well-thought-through code.

Start by giving your AI a pep-talk, and they’ll do what you’re asking surprisingly well.

2. 🎯 Define the Task

What do you want the AI to do? What is your actual goal?

If you can explain in detail the “what” you’re asking of the LLM, it’s going to do a better job of understanding the task, and consequently, will give you better output.

If you aren’t clear, the AI will guess, and may do a good job. Or it may not ¯\_(ツ)_/¯.


But, to maximize quality, detailed instructions are key!

3. 📑 Examples

Another way humans and LLMs can think similarly is that we both benefit from Show AND Tell.

If I give you a task, and explain the details of it to you, you might get it.

But, if I explain AND give you a few examples of what it looks like when the job is done well, you’re probably going to understand the task better, right?

Similarly, when prompting, explaining the task, and then ALSO giving a number of examples of how you want the task accomplished, will result in better output than if you do not include examples.
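To make this concrete, here is a small sketch (the instruction and example pairs below are invented for illustration) of a few-shot prompt that pairs the explanation with worked examples:

```python
# A few-shot prompt: an instruction plus worked examples, ending with
# a new input for the model to complete. All text here is made up.
instruction = "Rewrite each product description to be concise and friendly."

examples = [
    ("Our product is a device that can be used for the heating of water.",
     "A kettle that heats water fast."),
    ("This item is a chair designed for the purpose of sitting outdoors.",
     "A comfy outdoor chair."),
]

prompt_lines = [instruction, "", "Examples:"]
for before, after in examples:
    prompt_lines.append(f"Input: {before}")
    prompt_lines.append(f"Output: {after}")

# End with the real input, leaving the output for the model to fill in.
prompt_lines.append("Input: A lamp that is capable of the production of light.")
prompt_lines.append("Output:")

prompt = "\n".join(prompt_lines)
```

The examples show the model the style and length you want in a way a paragraph of instructions alone often fails to.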

4. 🗺️ Context

For basically all projects and problems we find ourselves working on, there is context that plays into how we approach our work.


In programming projects, there is pre-existing code we work with.

In translation, there’s the content that you actually want to see translated.

In writing, project-specific style and key terms play into how we define high-quality output.


Providing this context to a large language model is vital to getting good output.


Now, providing too much context can also be problematic. Most of these models start to perform poorly when they’re given too much information, so you probably don’t want to give the AI millions of words at a time, except under very specific conditions. (There are other techniques, like Retrieval-Augmented Generation, that you can use to work with massive datasets instead.)

However, including the specific context that is applicable for the task you’re giving the AI will result in much better quality output than if you just ask it to do a job without providing that context.
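One common pattern (a sketch; the delimiter style and the rough word-count threshold here are my own choices, not a standard) is to wrap the context in clear delimiters so the model can tell your instructions apart from the material it’s working on:

```python
# Wrap task context in clear delimiters, with a rough size sanity check.
# The 100_000-word limit is an arbitrary illustrative threshold.

def with_context(task, context, max_words=100_000):
    if len(context.split()) > max_words:
        raise ValueError("Context is very large; consider retrieval (RAG) instead.")
    return (
        f"{task}\n\n"
        "=== CONTEXT START ===\n"
        f"{context}\n"
        "=== CONTEXT END ==="
    )

prompt = with_context(
    task="Summarize the meeting notes below in three bullet points.",
    context="Notes: we agreed to ship the beta on Friday...",
)
```

The delimiters aren’t magic words; any clear, consistent markers help the model separate instructions from reference material.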

5. 🤔 Confirmation Questions

This one is gold!

It’s super simple, but in my experience, it can increase the likelihood that the LLM will succeed in outputting what you want by maybe 2-3x.


You know how sometimes you’ll send a prompt to an LLM, and it misunderstands what you asked, and you need to ask again?

It would be nice if we could avoid that, wouldn’t it?

Well, adding just one line to the end of a prompt will decrease the likelihood of a misunderstanding like that significantly.

And this piece you can just copy and paste into any prompt.


Here it is:

Before you answer, please ask me 10 questions 
to clarify what I have asked of you.

Turns out, LLMs can be pretty good at figuring out what parts of what we’ve asked of them are ambiguous, and they can then ask us to clarify those specific ambiguities in order to clarify the task.

If you include this line at the end of your prompt, the AI will ask you 10 questions where it doesn’t fully understand what you’re asking of it.

Go through and try to give thoughtful answers to these questions, then send them.


I have found that adding this line (and the question-answering step it creates) definitely increases the time I spend on any single prompt. But it also increases the likelihood that the LLM will understand what I’m asking it to do, saving me the time of trying several different prompts to get the right output.

So, I’ve found it a very worthy addition!
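Putting all five pieces together, the framework can be sketched as a simple template (a minimal illustration; the function and field names are my own, not from any library):

```python
# Assemble the five pieces of a prompt into one message:
# identity, task, examples, context, and a confirmation-question request.
# All names here are illustrative, not from any real library.

def build_prompt(identity, task, examples=None, context=None, num_questions=10):
    parts = [identity.strip(), task.strip()]
    if examples:
        parts.append("Here are examples of the kind of output I want:")
        parts.extend(e.strip() for e in examples)
    if context:
        parts.append("Here is the relevant context:")
        parts.append(context.strip())
    parts.append(
        f"Before you answer, please ask me {num_questions} questions "
        "to clarify what I have asked of you."
    )
    return "\n\n".join(parts)

prompt = build_prompt(
    identity="You are a world-class translator.",
    task="Translate the script below from English into Hindi.",
    examples=["English: Welcome back! -> Hindi: वापसी पर स्वागत है!"],
    context="Script: Welcome to our channel.",
)
```

Any one piece can be dropped for simple requests; the full set is what tends to produce the near-magical results.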

🗨️ A couple of other thoughts

The above is my general framework for approaching prompting LLMs directly, such as in a chatbot interface. Five simple pieces, but useful ones!


It is worth noting that prompting templates do look different in certain other instances, such as when prompt engineering for building AI agents (where objectives and tool definitions are other vital components).


It is also worth noting that different types of models can and should be prompted differently.

A high-parameter reasoning model like OpenAI’s o3, or Anthropic’s Claude 3.7 Sonnet with extended thinking, might not need as many examples to create high-quality work. So, you can usually get quite high-quality output even with less prompt engineering, especially because these models can spend time “reasoning” before they give you an answer. Meanwhile, a smaller, cheaper model may require more purposeful prompting, with more examples, to reliably give you the output you want.