Unlock Better AI Responses: 5 High Impact Prompting Techniques

TLDR

If you want better quality AI output:

  • Assign a role – tell it who it is
    ❌ “How do I do X?” → ✅ “You are a senior .NET developer. How do I do X?”
  • Clarify, don’t assume – force it to ask questions first
    ❌ “Write a migration script” → ✅ “Write a migration script. Ask clarifying questions before generating the script”
  • Ask for confidence – make it self-rate
    ❌ “What’s the optimal read/write ratio for this database?” → ✅ “What’s the optimal read/write ratio for this database? Include a confidence percentage at the end of your answer”
  • Brainstorm multiple options – explore trade-offs
    ❌ “How do I paginate?” → ✅ “Generate 3 approaches with pros/cons and a recommendation”
  • Give rich context – feed all relevant info
    ❌ “Refactor this code” → ✅ “Follow MyService.cs patterns and tests”

AI models are incredibly powerful, but they are only as good as the prompts you give them.

Through months of experimentation and research with ChatGPT, Claude, Gemini, and other AI systems, I have found 5 techniques that consistently deliver the best return on time invested – think of these as the 80/20 of prompt engineering.

You can use any one of these on its own, or combine them for even greater gains!

The difference is not subtle. These tweaks can be the difference between a vague, generic answer and something you could ship to production today.


1. Roleplay as an Expert

Tell the AI model who it is before you tell it what to do.

When you assign an expert role, the model shifts from being a passive, general-purpose assistant to a subject matter specialist. This encourages more direct advice, stronger opinions, and sometimes valuable pushback on your own ideas.

This reduces the “sure, whatever you say” pandering effect that most AI models demonstrate and increases the quality and depth of your results.

❌ Bad Prompt

“How do I do X in React?”

❌ Result

“You could use a hook or maybe just useState, it depends on what you prefer.”

✅ Improved Prompt

“You are a senior React developer with 10 years of production experience. I’m building a production feature that requires X. Recommend the most maintainable and scalable approach, and explain why.”

✅ Improved Result

“Given you are working in a production React environment, I recommend implementing this using a custom hook. This will improve code reuse and maintainability. Avoid inline useState here because it will be harder to share logic across components. I can provide a code sample if you want.”
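
If you are calling a model through an API rather than a chat window, the role goes in the system message. Here is a minimal sketch using the OpenAI Python SDK – the model name and the exact wording are placeholder assumptions, so swap in whatever you actually use:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name – use whichever model you have access to
    messages=[
        # The system message assigns the expert role before the actual question
        {
            "role": "system",
            "content": "You are a senior React developer with 10 years of production experience.",
        },
        {
            "role": "user",
            "content": "I'm building a production feature that requires X. "
            "Recommend the most maintainable and scalable approach, and explain why.",
        },
    ],
)

print(response.choices[0].message.content)
```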

💡Applications

  • “You are a lead backend engineer at a fintech company” → better API designs and implementations that consider scalability, compliance and constraints of the financial domain
  • “You are a cloud architect specialising in AWS” → stronger infrastructure recommendations with cost and performance trade-offs
  • “You are a DevOps engineer with Kubernetes experience” → better deployment strategies and Helm chart configurations
  • “You are an expert software architect in the Payments industry” → more nuanced and well-thought out architectures for the Payments domain
  • “You are a frontend performance expert” → targeted recommendations and brainstorming to improve frontend performance

2. Encourage Ambiguity Resolution

Tell it not to make wild guesses.

AI models are trained to fill in gaps with “reasonable” guesses. This can be useful in creative writing but dangerous in technical work.

By telling it to clarify instead of assuming, you get fewer hallucinations and better alignment with your requirements.

❌ Bad Prompt

“Write me a migration script for the users table.”

❌ Result

“Here’s the migration script” (The model has guessed column names incorrectly, breaking your database migration)

✅ Improved Prompt

“Write me a migration script for the users table. Do not make any assumptions — if any column names, constraints, or types are unclear, ask clarifying questions before producing the script.”

✅ Improved Result

“Before I proceed, can you confirm the exact column names and whether user_id is nullable? This will affect the migration script.”
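
A low-effort way to apply this everywhere is to append the instruction to your prompts automatically. A minimal Python sketch – the suffix wording is my own assumption, so tune it to your workflow:

```python
CLARIFY_SUFFIX = (
    "\n\nDo not make any assumptions. If any names, constraints, or types "
    "are unclear, ask clarifying questions before producing an answer."
)

def with_clarification(prompt: str) -> str:
    """Append an instruction telling the model to ask before it guesses."""
    return prompt + CLARIFY_SUFFIX

print(with_clarification("Write me a migration script for the users table."))
```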

💡Applications

  • Implementing API endpoints in large existing codebases
  • Generating code with missing or outdated specs
  • Building CI/CD pipelines where environment variables are unclear
  • Refactoring code without a complete test suite
  • Useful anywhere accuracy matters!

3. Ask for a Confidence Percentage

Make it tell you how sure it is.

An AI model’s tone can be misleading – it can sound 100% sure even when it is guessing. By forcing it to self-rate its confidence, you get a quantitative sense of trustworthiness.

Low confidence is a signal to verify with other sources or provide more context.

❌ Bad Prompt

“What’s the optimal read/write ratio for this database?”

❌ Result

“The answer is 42.” (No clue if that’s a solid fact or a guess)

✅ Improved Prompt

“What’s the optimal read/write ratio for this database? Include a confidence percentage at the end of your answer, and explain what factors reduce or increase your certainty.”

✅ Improved Result

“The answer is 42. Confidence: 60%. Reason: Limited recent data on this topic and conflicting sources.”
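
Because the confidence comes back as plain text, you can even parse it and gate automated workflows on it. A rough sketch that assumes the model follows the “Confidence: NN%” format you asked for (it usually does, but treat this as best-effort parsing, and the 75% threshold is an arbitrary choice):

```python
import re

def extract_confidence(answer: str) -> int | None:
    """Pull the self-reported 'Confidence: NN%' figure out of a response."""
    match = re.search(r"Confidence:\s*(\d{1,3})\s*%", answer)
    return int(match.group(1)) if match else None

answer = "The answer is 42. Confidence: 60%. Reason: limited recent data."
score = extract_confidence(answer)
if score is None or score < 75:  # arbitrary threshold – pick your own
    print(f"Low or missing confidence ({score}) – verify with another source.")
```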

💡Applications

  • When making decisions that rely on new, incomplete, or changing data
  • When fact-checking information from multiple sources
  • When prioritising between multiple possible actions or strategies

4. Encourage Critical Thinking

Ask it to explore multiple ideas/paths/approaches and compare them.

By default, AI models optimise for speed and give you a single solution. Asking for multiple approaches forces deeper thinking, comparison, and justification.

This is especially powerful for problem-solving and brainstorming in software architecture, design, or strategy.

❌ Bad Prompt

“How should I implement pagination in my API?”

❌ Result

“Just use OFFSET and LIMIT in your query.”

✅ Improved Prompt

“Generate three different ways to implement pagination in my API, explaining pros, cons, and trade-offs for each approach. Then recommend the one that best balances performance, scalability, and ease of use.”

✅ Improved Result

“Option 1: Offset/limit – Simple to implement and lets clients jump to arbitrary pages, but queries slow down as the offset grows and results can shift when rows are inserted.
Option 2: Cursor-based (keyset) – Fast and stable at any depth, ideal for infinite scroll, but offers no random page access and is slightly more complex to implement.
Option 3: Page tokens – Opaque continuation tokens suit distributed backends and let you change the underlying strategy later, but add encoding/decoding overhead.
I recommend Option 2 for most production APIs unless clients need to jump to arbitrary pages.”
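
You can turn this into a reusable template so every open-ended question gets the same treatment. A small sketch – the default criteria are just an example, so swap in whatever matters for your decision:

```python
def brainstorm_prompt(
    task: str,
    n: int = 3,
    criteria: str = "performance, scalability, and ease of use",
) -> str:
    """Build a prompt that forces the model to compare several approaches."""
    return (
        f"Generate {n} different ways to {task}, explaining the pros, cons, "
        f"and trade-offs of each approach. Then recommend the one that best "
        f"balances {criteria}."
    )

print(brainstorm_prompt("implement pagination in my API"))
```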

💡Applications

  • When debugging a complex production issue where there could be multiple possible causes
  • When brainstorming ideas on how to implement a particular feature
  • When troubleshooting deployment failures with several moving parts
  • When making architectural decisions that depend on comparing multiple approaches

5. Context, Context, Context

Give it everything it needs up front.

An AI model without context is like a developer dropped into a codebase with no documentation. The more relevant examples, constraints, and references you give, the more accurate and tailored the output will be.

Remember this – an AI model (a large language model, to be precise) is just a prompt-driven probability machine at the end of the day!

You can add files, directories, terminal outputs, web links, images, etc. Get creative – that list is growing every day!

❌ Bad Prompt

“Refactor this code.”

❌ Result

“Here’s a refactor suggestion” (Completely ignores your team’s style and conventions)

✅ Improved Prompt

“Refactor this code to follow the same style and patterns as in MyService.cs, ensuring naming conventions, null checks, and test coverage match our current standards. Then write tests using MyServiceTests.cs as a reference for testing approach.”

✅ Improved Result

“Here’s the refactored method following the patterns from MyService.cs, including consistent naming and null checks as used in that file. I have also written tests following patterns from MyServiceTests.cs.”
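
If you are not inside an IDE assistant that reads files for you, you can inline the reference files yourself. A minimal sketch – the reference file names follow the article’s example, and MyNewService.cs is a hypothetical stand-in for whatever you are refactoring:

```python
from pathlib import Path

def refactor_prompt(target: str, style_ref: str, test_ref: str) -> str:
    """Inline reference files so the model can mirror existing patterns."""
    return (
        "Refactor the code below to follow the same style, naming conventions, "
        "and null checks as the style reference, then write tests using the "
        "test reference as a guide.\n\n"
        f"--- Code to refactor ({target}) ---\n{Path(target).read_text()}\n\n"
        f"--- Style reference ({style_ref}) ---\n{Path(style_ref).read_text()}\n\n"
        f"--- Test reference ({test_ref}) ---\n{Path(test_ref).read_text()}"
    )

# MyNewService.cs is hypothetical – point these at real files in your codebase
prompt = refactor_prompt("MyNewService.cs", "MyService.cs", "MyServiceTests.cs")
```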

💡Applications

  • Refactoring backend services to follow an existing architectural pattern
  • Writing unit/integration tests aligned with the current testing framework
  • Implementing new endpoints to match an existing API style guide
  • Generating Terraform modules consistent with existing infrastructure code
  • Creating new React components to match established UI design patterns

Bottom line

Treat AI models like junior engineers. Give them a role, set guardrails, demand clarity, ask for reasoning, and feed them rich context. Use these techniques individually or mix them together for compounding gains – they will surprise you with how effective they can be.
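
To see how the techniques compound, here is one prompt that stacks all five – the file name is hypothetical, so substitute your own context:

```python
combined_prompt = (
    "You are a senior backend engineer with 10 years of production "  # 1. role
    "experience. Generate three approaches to implementing pagination "
    "in our API, with pros, cons, and a recommendation. "              # 4. options
    "Ask clarifying questions before answering if anything is "        # 2. clarify
    "ambiguous, and end with a confidence percentage. "                # 3. confidence
    "Context: follow the conventions in UsersController.cs."           # 5. context
)
print(combined_prompt)
```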

And of course, enjoy the sweet productivity gains! ⚡

Subscribe to my newsletter, where I share actionable advice and high-quality insights from across the internet! 🚀