Prompt Engineering 101: Talking to the Machines So They Actually Listen

Authors
  • Dexter Mehta

What is prompt engineering?

Large language models (LLMs) such as Gemini or GPT are basically giant autocomplete engines: you give them text (the prompt), and they predict the next tokens. Prompt engineering is the art of crafting that input so the model predicts something useful instead of nonsense. Google’s September 2024 white-paper sums it up: “Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate outputs… It’s an iterative process.”

For beginners like us, think of it as talking to a super-literal intern: the clearer the instruction, the better the deliverable.
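To make the “super-literal intern” idea concrete, here is a minimal sketch contrasting a vague prompt with a precise one. These are just strings; plug them into whichever client you use (Gemini, GPT, Claude). The code intentionally makes no API call.

```python
# Same underlying task, phrased vaguely vs. precisely.
# The precise version states a role, the exact change wanted,
# and the output format -- the three things a literal intern needs.

vague = "Fix this."

precise = (
    "You are a Python code reviewer.\n"
    "Rewrite the function below so it handles an empty list without raising,\n"
    "and return only a single fenced code block.\n\n"
    "def mean(xs):\n"
    "    return sum(xs) / len(xs)\n"
)
```

Sent to a model, the precise prompt reliably yields a guarded `mean()`; the vague one could return almost anything.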


Why should we care?

  • No extra infra – You can squeeze better results out of the same model just by rewriting words.
  • Portable skill – Techniques transfer across APIs (Gemini, GPT, Claude, open-source Gemma).
  • Career booster – AI assistants are becoming bread-and-butter dev tools; knowing how to steer them is like knowing Git.

Core techniques (in human English)

| Technique | Elevator pitch | When I use it |
| --- | --- | --- |
| Zero-shot | Just give instructions: “Summarise this.” | Fast prototypes or trivial tasks. |
| One/Few-shot | Add 1–5 examples the model can copy. | Classification, data extraction. |
| System / Role / Context | Prepend extra lines that set the what, who, and where. | Tone control (“You are a sarcastic QA”), or forcing JSON output. |
| Chain of Thought (CoT) | Tell the model to “think step-by-step.” | Math, reasoning, coding explanations. |
| ReAct / Agents | Let the model reason and call tools (search, code, DB). | Complex tasks that need external info. |

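As a rough illustration of the first few rows, here are plain-string templates for zero-shot, few-shot, and chain-of-thought prompts. The review text and examples are made up; the point is the shape of each prompt, not the content.

```python
review = "The battery dies in an hour."

# Zero-shot: instructions only, no examples.
zero_shot = (
    "Classify the sentiment of this review as POSITIVE, NEGATIVE, or NEUTRAL:\n"
    f"{review}"
)

# Few-shot: a couple of labelled examples the model can copy the pattern from.
few_shot = (
    "Review: 'Loved the screen.' -> POSITIVE\n"
    "Review: 'Arrived broken.' -> NEGATIVE\n"
    f"Review: '{review}' ->"
)

# Chain of thought: explicitly ask for intermediate reasoning.
cot = (
    "A train leaves at 3pm travelling 60 km/h. How far has it gone by 5:30pm?\n"
    "Think step-by-step before giving the final answer."
)
```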
Google’s guide walks through each of these with code snippets and sample prompts.


Five habits that instantly level-up your prompts

  1. Show, don’t tell – Instead of “Rewrite politely,” paste one polite example and say “Follow this style.” Models mimic patterns better than abstract rules.
  2. Positive instructions > long constraint lists – White-paper advice: give clear “do” instructions rather than endless “don’t” bans; it reduces confusion.
  3. Specify output formats – Ask for valid JSON, Markdown, or CSV when you need structure. It forces the model to stay on rails and tames hallucinations.
  4. Tweak the sampler knobs – Temperature ≈ creativity; top-K/top-P ≈ how wide the model looks. Google suggests starting at temperature = 0.2, top-P = 0.95, top-K = 30 for balanced responses.
  5. Document every iteration – Keep a sheet: prompt, model, temperature, examples, score. When the API updates, you’ll know which prompt broke and why. Future-you will thank present-you.
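Habits 3 and 4 can be sketched together: request strict JSON, validate the reply before trusting it, and keep the sampler settings next to the prompt so every run is reproducible. The config keys below mirror common LLM APIs, but exact parameter names vary by client, so check your SDK’s docs; the reply string is simulated so the sketch runs without an API key.

```python
import json

# Sampler settings logged alongside the prompt (names vary per API).
config = {"temperature": 0.2, "top_p": 0.95, "top_k": 30}

prompt = (
    "Extract the product and price from the sentence below.\n"
    'Respond with ONLY valid JSON like {"product": "...", "price": 0.0}.\n\n'
    "Sentence: The keyboard costs $49.99."
)

def parse_or_none(raw: str):
    """Validate the model's reply; a real loop would re-prompt on failure."""
    try:
        data = json.loads(raw)
        # Require exactly the keys we asked for.
        if {"product", "price"} <= data.keys():
            return data
    except json.JSONDecodeError:
        pass
    return None

# Simulated model reply, standing in for a real API response:
reply = '{"product": "keyboard", "price": 49.99}'
parsed = parse_or_none(reply)
```

Validating the output instead of trusting it is what actually “tames hallucinations”: a malformed reply becomes a retry, not a crash downstream.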

Mini-workout: getting better fast

  1. Daily micro-prompts – Pick one boring task (rename files, write commit message) and build a prompt that does it.
  2. Prompt-swap with friends – Trade and critique. Different brains discover different hacks.
  3. Reverse-engineer docs – Copy an example from Google’s paper, strip pieces, observe how output degrades.
  4. Add a test harness – In code, wrap your prompt in unit tests with assert-contains. You’ll spot regressions when you fiddle.
  5. Read the failures – Half the learning is reading the model’s wrong answer and figuring out which words mis-led it.
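Point 4 above can be as small as a stub and two asserts. The `call_model` function here is a placeholder that returns a canned string so the example runs offline; swap in your real API client and the same asserts become a regression test for your prompt.

```python
def call_model(prompt: str) -> str:
    # Stub so the sketch runs without a network call.
    # Replace the body with your actual LLM client call.
    return "git commit -m 'Rename config files for clarity'"

def test_commit_message_prompt():
    out = call_model(
        "Write a one-line git commit message for renaming config files."
    )
    assert "commit" in out             # stayed on task
    assert len(out.splitlines()) == 1  # respected the one-line constraint

test_commit_message_prompt()
```

Run this every time you tweak the wording or the model version bumps, and you’ll spot regressions the moment they appear instead of in production.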

Closing thoughts

Prompt engineering feels like magic, but beneath the sparkle it’s just clear writing, experimentation, and a dash of probability tuning. Start small, log everything, and iterate. Before long you’ll catch yourself treating the LLM like a pair-programmer who actually gets you—and that’s when the real productivity boost kicks in.

Happy prompting! 🚀