Mastering the art of prompting for code
"Prompt engineer" is a real job after all, who knew?

If you've ever found yourself frustrated with the output of AI-assisted IDEs like Cursor or Windsurf, then I might have some bad news for you.
Are you sitting down? Because you're probably not gonna like this:
It's kinda your own fault.
Welcome to the world of 'Prompt engineering'
I've been a developer for a very long time. In fact, statistically speaking, I've probably been a developer for longer than you've been alive, so believe me when I tell you that I HATE the term "prompt engineer". It started out as something of a joke, but as AI coding tools become more and more pervasive and ever more useful, it's starting to look like this isn't just a real thing, it's actually an essential skill.
The Evolution of Coding: From Syntax to Semantics
Gone are the days when coding was solely about memorizing syntax and wrestling with semicolons. With the advent of AI coding assistants, the paradigm has shifted. Now, articulating your intent in clear, concise language is just as crucial as understanding the underlying code. Think of it as a new level of abstraction where your instructions are the code, and the AI translates them into functional programs.
This is going to result in a paradigm shift of biblical proportions. Why do I say that? Because developers have a common trait that is practically a meme at this point:
Developers don't communicate well.
Sure, devs are fantastic at sitting in a dark room and hammering a keyboard with our monkey paws until something magical happens, but how many of you can sit in a boardroom and clearly demonstrate a product to a group of people? How many of you have been able to articulate a problem, or a solution to that problem, to stakeholders in a way that they clearly understand?
Yes, you may have a deep understanding of the task at hand, and you may intrinsically know how to solve the problems that come your way, but if you can't communicate those problems clearly and effectively, you won't be able to get a good result from an AI.
The Misconception of AI Omniscience
It's a common fallacy to view AI as an all-knowing entity capable of reading between the lines. In reality, AI models are sophisticated pattern recognisers, heavily reliant on the input they receive. Ambiguous or vague prompts lead to subpar results. The AI isn't clairvoyant; it can't infer your intentions without explicit guidance.
Note: It's also worth pointing out at this stage that AIs are trained on existing data, so if you are working on something niche or undocumented, you may find that you can't prompt your way out of it. In this instance, you'll either have to break your prompts down into tiny chunks or - more likely - write the code manually.
The Anatomy of an Effective Prompt

Crafting a prompt is akin to drafting a blueprint for a building. The more detailed and precise it is, the better the outcome. In a nutshell, garbage in, garbage out.
Clarity and Specificity
Clearly define the task you want the AI to perform. Instead of saying:
Add a comment system to my blog
Specify it like this:
Create a React component called `Comments` which acts as a comment system for a blog. It should accept a `pageId` prop and fetch comments from a Supabase database. Comments should have the following fields: `id`, `post_id`, `author_id`, `content`, `status`, and `created_at`. Only display comments with the status `approved` unless the user is a moderator, in which case display pending comments too.
'Comments' is the name for the parent component but please break it down into logical sub components and use any existing components where possible.
Include functionality to:
- Allow logged-in users to post new comments.
- Mark new comments as `pending` by default unless posted by a moderator.
- Highlight pending comments for moderators with a red dashed border, including buttons to approve or reject each comment.
- Paginate comments, displaying 15 per page.
- Support user mentions using the syntax `@username`.
Also, clearly document in a README.md file that GDPR compliance and obtaining user consent is the user's responsibility.
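Part of being specific is knowing what a correct implementation of your spec would even look like. For the `@username` mention support above, for example, I'd expect the AI to produce something along these lines (a hypothetical sketch of my own; the names and username rules here are my assumptions, not part of the prompt):

```typescript
// Split a comment's text into plain-text and mention segments so the UI
// can render mentions (e.g. as profile links). Illustrative only.
type Segment = { type: "text" | "mention"; value: string };

function parseMentions(content: string): Segment[] {
  const segments: Segment[] = [];
  // Assumes usernames are word characters; adjust to your real username rules.
  const pattern = /@(\w+)/g;
  let last = 0;
  for (const match of content.matchAll(pattern)) {
    const index = match.index ?? 0;
    if (index > last) {
      segments.push({ type: "text", value: content.slice(last, index) });
    }
    segments.push({ type: "mention", value: match[1] });
    last = index + match[0].length;
  }
  if (last < content.length) {
    segments.push({ type: "text", value: content.slice(last) });
  }
  return segments;
}
```

If you can picture output like this before you prompt, you'll also spot it immediately when the AI gets it wrong.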
Context
Provide relevant background information. If the function is part of a larger module, mention that explicitly:
This component will be added to my existing Next.js blog. The project uses Supabase for authentication, and there's already a Supabase instance configured with environment variables stored in `.env.local`. Eventually, I want to package this component as a standalone, reusable NPM package, so ensure it's self-contained and clearly structured for easy extraction later.
Constraints and Requirements
Outline any specific requirements or constraints clearly:
- Use only React functional components with hooks.
- Comments must be explicitly sanitized with DOMPurify to prevent XSS attacks.
- Admin privileges must rely on Supabase user IDs rather than email addresses alone.
- Avoid soft-deleting comments: rejected comments must be completely removed from the database.
- Write Jest tests alongside each feature implementation to clearly specify expected behavior.
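It's worth understanding what you're asking for in constraints like the sanitization one. In the real component you'd call DOMPurify as the constraint says; this hand-rolled stand-in (my own illustration, not the actual requirement) just shows the underlying principle - never render user-supplied content as raw HTML:

```typescript
// Minimal escaping helper to illustrate XSS prevention. In production,
// use DOMPurify.sanitize() as the constraint above specifies.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```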
Examples
Provide examples of expected output.
Here's an example of what the data returned from Supabase for an approved comment should look like:

```json
{
  "id": "123e4567-e89b-12d3-a456-426614174000",
  "post_id": "introduction-to-ai-prompting",
  "author_id": "user-uuid-123",
  "content": "Great article! Really helped clarify prompting.",
  "status": "approved",
  "created_at": "2025-04-07T13:45:00Z"
}
```
Doing it this way gives your AI coding assistant a well-defined path to follow, ensuring it builds exactly what you want, rather than vaguely stumbling around hoping for the best.
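To see why that level of detail matters, here's a rough sketch (my own illustration, not part of the prompt) of the visibility and pagination rules the prompt pins down, operating on rows shaped like the JSON above. Every line of this logic maps directly to a sentence in the prompt:

```typescript
// Row shape matching the example Supabase data above.
type CommentRow = {
  id: string;
  post_id: string;
  author_id: string;
  content: string;
  status: "approved" | "pending" | "rejected";
  created_at: string;
};

const PAGE_SIZE = 15; // the prompt asks for 15 comments per page

// Everyone sees approved comments; moderators also see pending ones.
function visibleComments(rows: CommentRow[], isModerator: boolean): CommentRow[] {
  return rows.filter(
    (row) => row.status === "approved" || (isModerator && row.status === "pending")
  );
}

// Pages are 1-indexed; returns the slice of rows for the requested page.
function page(rows: CommentRow[], pageNumber: number): CommentRow[] {
  const start = (pageNumber - 1) * PAGE_SIZE;
  return rows.slice(start, start + PAGE_SIZE);
}
```

A vague prompt like "add a comment system" leaves every one of those decisions to the AI's imagination.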
Breaking Down Complex Tasks
Even the example above isn't ideal though. Rather than asking the AI to do everything at once, provide it with the overall goal as the main prompt but then tell it that you are going to break the task down into smaller chunks and that it should only focus on one of those chunks at a time.
Tackling a large problem in one go can overwhelm both you and the AI. Think of it like tickets: you wouldn't stick 10 tickets into your todo list and handle them all at once, so why expect the AI to do that?
So, putting together what we have learned so far, the example above would now become:
Create a React component called `Comments` which acts as a comment system for a blog. It should accept a `pageId` prop and fetch comments from a Supabase database. Comments should have the following fields: `id`, `post_id`, `author_id`, `content`, `status`, and `created_at`. Only display comments with the status `approved` unless the user is a moderator, in which case display pending comments too.
'Comments' is the name for the parent component but please break it down into logical sub components and use any existing components where possible.
Include functionality to:
- Allow logged-in users to post new comments.
- Mark new comments as `pending` by default unless posted by a moderator.
- Highlight pending comments for moderators with a red dashed border, including buttons to approve or reject each comment.
- Paginate comments, displaying 15 per page.
- Support user mentions using the syntax `@username`.
Also, clearly document in a README.md file that GDPR compliance and obtaining user consent is the user's responsibility.
This component will be added to my existing Next.js blog. The project uses Supabase for authentication, and there's already a Supabase instance configured with environment variables stored in `.env.local`. Eventually, I want to package this component as a standalone, reusable NPM package, so ensure it's self-contained and clearly structured for easy extraction later.
Please generate a file in the root of the repository called 'comment-system-plan.md'. This will be our task management system. Break the tasks we will need down into small, manageable chunks, with a recommended prompt for how to perform each task alongside its chunk. I will prompt you to build our new comment system chunk-by-chunk. Please do not get ahead of yourself, but make sure you consider the entire plan at each implementation step.
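For illustration, the first entries of that plan file might come back looking something like this (a hypothetical sketch of my own; the AI's actual output will vary):

```markdown
# Comment System Plan

## Chunk 1: Database schema
Create the `comments` table in Supabase with the fields `id`, `post_id`,
`author_id`, `content`, `status` and `created_at`.
**Suggested prompt:** "Implement chunk 1 of comment-system-plan.md: create the
`comments` table and its row-level security policies."

## Chunk 2: Fetch and display approved comments
Build the `Comments` parent component and a read-only list of approved
comments for a given `pageId`.
**Suggested prompt:** "Implement chunk 2 of comment-system-plan.md, following
the plan and the project rules."
```

Each chunk then becomes a single, self-contained prompt in a later chat.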
Give it a break now and then, it can get overwhelmed too
So, this problem is becoming less and less of an issue with every month that passes, but there is still a limitation on how useful an AI can be, and that limitation is the 'context window'. It's basically the AI's short-term memory. After enough interaction, the AI will forget details from earlier in the session. (You can read more about tokens here if you're interested.)
That means that if you've been chatting and coding for a while in a single chat session, you've probably overwhelmed the context window, and the AI will start acting a bit dumber, having forgotten what you were telling it at the start of the chat.
When this happens, the best thing to do is to just close that chat and start a new one. In fact, it's good practice to just kill a chat and start a new one for every new unit of work that you start (so for each chunk in the plan we made above). It DOES mean you'll have to re-prompt it as the new chat will also have a fresh context window (which is why we made that plan into a document!) but that's usually pretty easy. I have a code snippet I give each new prompt in a project (see more on that below.)
Tell it what sort of quality you expect
Again, AI isn't an omniscient god. Think of it as a junior developer who has read and memorised every coding tutorial and every bit of code-related documentation on the internet, but then hasn't been on the internet once since it did that revision over a year ago.
If you think of it this way then you'll start to talk to it differently as you'll realise:
- It's still a junior, just a very knowledgeable one
- It's probably a little outdated now. Models are trained on a snapshot of the internet at the time, if you don't specifically tell it to look at newer documentation, it will base its answers on whatever is in its training data.
- It doesn't understand nuance and it can't easily read between the lines.
So instead of just telling it to go off and do something, tell it what sort of code output you expect. Now, obviously it would get annoying to do that every single time, which brings us nicely to:
Give it an initial prompt
So for every project I have a code snippet that I paste into every new chat. For the blog project it was this:
You are a junior developer working alongside me, an experienced senior developer. You are assisting me with creating my Next.js and Sanity.io based blog project. These are the critical project rules you must follow in all suggestions, code generations, and refactors:
- TypeScript strict mode is enabled. Never use `any`, never leave unused code.
- React functional components only, using hooks.
- All data is stored in Sanity CMS
- The blog is a statically generated site so client side code should be used sparingly
- Minimise component-level local state (use only for isolated UI behaviour).
- Document everything: functions, components, props, types, interfaces.
- Prioritise performance: memoisation, prevent unnecessary re-renders, optimise imports.
- Prioritise accessibility: aria-labels, alt text, keyboard navigation.
- Do not hardcode secrets. Use env variables.
- Always use yarn, avoid npm or pnpm.
- Less code is ALWAYS preferred over more code, keep the DRY and KISS principles in mind at all times.
Prohibited patterns (never use):
- Class components
- `any` types
- Redux or other third-party state managers
- PropTypes
- Console logs in production
Assume that I care more about quality and maintainability than speed but do not sacrifice major performance gains without good reason.
Explain your thought process and if you are unsure about something, be up front about it. If there are a few equally good options on the table, present them to me as options and I will choose the route to take.
Before starting, confirm you understand these rules by replying:
> *"Understood. I'll follow these rules throughout this session."*
I keep this snippet in Rocket Typist (not sponsored, I just really like the app) so at the start of each chat, I just type `prompt:blog` and my snippet is injected instantly.
If you use that along with your plan (which you can tell it to digest at the start of each chat too, just add it to your snippet), you'll find that the output of the code is a lot closer to great than it is to crap.
Rules and Guardrails
The AIs are a little bit, shall we say, wilful in their approach to handling tasks, even when they are fairly well prompted. The methods above are by far the best way to keep them in check; however, there are other options.
1: A rules file
Depending on your IDE, this could be called something like `.cursorrules` or `.windsurfrules`. Hopefully sooner or later this will be standardised and we'll have an `.editorrules` or `.agentrules` (my preference) file.
This file is a project-specific prompt, similar to the snippet I use above but much more fleshed out. I won't share an example here because a great resource of rules already exists in this excellent repository: https://github.com/PatrickJS/awesome-cursorrules?tab=readme-ov-file
2: An IDE's built-in global rules
Windsurf and Cursor both have a settings menu specifically for AI. There is a section there called 'Rules' where you can add global rules that you want all of your projects to follow. Again, this can and should be worded as a global prompt.
WARNING, here be monsters
Adding rules is great and I still recommend you do it, but using the prompt snippet is still the best way right now. For some reason, AIs are still able to ignore their rules and often don't even pull the `.cursorrules` or global rules into their context. So it's usually worthwhile adding them, but definitely do not rely on them. I have had some limited success with adding "digest the .cursorrules file and ensure you follow those rules to the letter" to my snippet; however, it's still not perfect. YMMV.
Remember that it's not perfect (yet)
As I mentioned above, if the project you are working on is poorly documented, brand new or very niche, you may find that AI just can't help you here.
You will also likely find that it will, at some point, produce trash. A lot of the time you can mitigate that by creating a new chat, but if it's consistently producing rubbish, you may have to rethink how you are writing your prompts.
Ultimately, we are not yet at the point where inexperienced devs can produce good-quality output with these tools without getting very lucky. There is still a place for experienced devs in this world, as you need to know what good code looks like in order to avoid the so-called "AI slop".
This is going to be the future, so you should learn this skill now
I've seen so many posts online that are mostly seasoned devs bellyaching about the AI future. The truth is that the future is going to be a mixed bag but it will be AI-Driven.
Yes. It's true, there will be a lot of tech debt produced by AI coding but if you think that this is a new thing then you don't understand the industry. We already have a ton of tech debt created by short-sighted companies who hire a team of Juniors to build products and then realise too late that they should have hired a mix of juniors, middle-weights and seniors.
This is the same thing: you'll see smaller, AI-driven junior teams and yes, they probably will create a big mess, but it will be the AI-assisted seniors who will be hired to clean that mess up, not you. If you're not using AI then you are working much more slowly than your peers, and that just won't fly.
The future of software engineering is not code anymore, it's English. A good coder now needs to be a good communicator, so it's time to learn a new skill. It's time to learn how to be a good prompt engineer.
Sorry. I told you you wouldn't like it.
Alexander Foxleigh
Alex Foxleigh is a Senior Front-End Developer and Tech Lead who spends his days making the web friendlier, faster, and easier to use. He’s big on clean code, clever automations, and advocating for accessibility so everyone can enjoy tech - not just those who find it easy. Being neurodivergent himself, Alex actively speaks up for more inclusive workplaces and designs that welcome all kinds of minds.
Off the clock, Alex is a proud nerd who loves losing himself in video games, watching sci-fi, or tweaking his ever-evolving smart home setup until it’s borderline sentient. He’s also a passionate cat person, because life’s just better when you share it with furry chaos machines.