What the best AI users know about prompting (that everyone else misses)
Don't be a prompt engineer. Just be someone who thinks strategically about how they use AI.
There are a lot of myths about AI that have been injected into the proverbial universe. One of those is that using it well requires some elite understanding of "prompt engineering", the insinuation being that getting good outputs is about memorizing secret syntax, structures, frameworks, or rules.
But the reality? AI doesn’t reward you for perfect inputs, because perfect inputs are a fallacy. It rewards a thoughtful understanding of our goals (or challenges) and an intentional approach to:
Intent: What are we trying to get done?
Context: What information is needed to get that done?
Clarity: What’s the best way to communicate the first two?
Key Takeaways
The best AI users are the best because they frame problems effectively, iterate with intention, and know how to be a guide
AI is a tool for collaboration, not just content generation; treating it like a conversation leads to sharper, more relevant outputs
Master AI not by memorizing tricks, but by understanding how and when to guide it to better results
In that sense, getting good with AI has less to do with engineering and more to do with thinking. The best users are the ones who know how to frame problems, adjust their approach, and strategically interact.
Let’s talk about what that actually looks like, including:
Why the term “prompt engineering” is both daunting and inaccurate for the everyday AI user
What the best AI users do when working with AI
How to close the gap between your current skills and the best of the best
Why we need to rethink prompting
There’s a reason the term "prompt engineering" sounds intimidating. It makes AI seem:
Rigid, like coding, where a misplaced character can break the whole damn thing
Strict—as if there’s no room for nuance, ambiguity, and conversation
Gamey, like there’s a perfect way to beat the level
But think of it like improv comedy versus a sitcom script. A sitcom has precise, structured dialogue and little room for deviation. Improv, on the other hand, relies on adaptation and thrives on the response to what’s happening in the moment.
The real skill lies in framing problems clearly and guiding AI towards useful responses. Float like a butterfly and all that.
What strategic AI users do differently
The best users don’t waste time obsessing over perfect prompts. They focus on strategy.
They think in terms of problems, not commands
Before typing anything, skilled AI users will ask: What problem am I actually trying to solve? Instead of prompting AI to “write a blog post about productivity,” they think about the process and the steps to get where they want to go. Am I looking for an outline? A draft? A unique angle and tagline?
This gives AI something only we can provide: direction. If you don’t define the problem and have a concept of the goal to be achieved, it’s a best-guess game. And AI sucks at that game.
Additionally, thinking in terms of problems clarifies how we shape responses through iteration. The best users interact dynamically, refining and guiding. With a firm grip on the problem, it becomes easier to do things like:
Take AI’s draft and make it punchier (or more humorous or relatable, etc.)
Push back on weak arguments or fluffy ideas
Call out and eliminate inaccuracies or hallucinations
All of which helps dispel the fallacy that AI is a one-and-done machine.
They chain different tools together
No single AI tool does everything well. Smart users combine tools for compounding results. They even combine AI and non-AI tools into workflows. Here are a few I use:
Let’s say I’m doing deep research on a topic either because (A) I have a knowledge gap or (B) I’m looking to save time on developing an overview of knowledge to share. My workflow would chain together:
ChatGPT → To metaprompt and cut down the time it takes to write a polished prompt
Perplexity → Where I’d use my prompt to conduct web research
If I’m doing image generation, which I often do for creating marketing collateral, I typically chain together:
ChatGPT → To help me quickly write a good prompt (yes, I do this a lot)
Google Gemini (Imagen) → To generate (and export) my image [1]
Adobe Express or Canva → Where I manipulate the image and layer on elements
Or let’s say I’m looking to generate some code in the context of an existing project. There are numerous ways to skin this cat (don’t ask me about vibecoding):
ChatGPT → Again, to help me write a thorough prompt for code generation
Claude → To run my prompt, generate the code, and draft story reqs
JIRA → To add in my reqs
Cursor → To paste in and troubleshoot (sometimes I forgo Claude and just use Cursor)
And that’s just a few workflows.
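For those comfortable with a bit of code, the same chaining idea can be wired up directly against the APIs. Below is a minimal sketch of the "metaprompt, then execute" pattern, assuming the official OpenAI and Anthropic Python SDKs; the model names are illustrative and will drift over time:

```python
# A minimal sketch of the "metaprompt, then execute" chain, assuming the
# official OpenAI and Anthropic Python SDKs (pip install openai anthropic)
# and API keys set via OPENAI_API_KEY / ANTHROPIC_API_KEY.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

def metaprompt(task: str) -> str:
    """Step 1: ask one model to write a polished prompt for the task."""
    response = openai_client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; swap for whatever is current
        messages=[{
            "role": "user",
            "content": f"Write a detailed, well-structured prompt for this task: {task}",
        }],
    )
    return response.choices[0].message.content

def execute(prompt: str) -> str:
    """Step 2: run the polished prompt against a second model."""
    message = claude_client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

print(execute(metaprompt("Generate a Python function that deduplicates customer records")))
```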
Yes, this means we’re operating in more than one place (but the reality of software is that we’re always operating in multiple places). More importantly, this serves us in a couple of ways. Thinking in terms of a toolkit means we get the best of multiple worlds. It can also mean we’re being more effective and kinder to our budgets.
They think beyond just text generation
AI tools can help us analyze data, extract insights from PDFs, generate interactive web pages, and do a lot of research in a little time. Remember the four buckets of practical AI for individuals?
When we look beyond generating text, we can start a lot earlier in the process of any given task.
We can also fill in (or shrink) the cracks between the meaningful work. This can be as simple as using an in-app tool like Gemini to summarize emails. Or more complex, like transforming content in one format (a table) into a different format (a bulleted list) or vice versa. It can also be things like getting up to speed (with speed) when switching gears, or quickly researching how to use new features in the tools available to us.
The more we wipe out or condense tedious tasks, the more strategy-heavy we can make our workloads. This is where the best AI users strike a balance. Because it’s in that creative, overarching, high-context realm where AI doesn’t hold a candle to what humans can do.
They experiment
Great AI users experiment. And in the world of AI, there are a lot of levels at which we can experiment, including:
Prompts: Trying different versions of prompts, tweaking inputs, and learning from what works (and what doesn’t)
Capabilities: Playing with new capabilities like deep research or reasoning models (models that “think”), in addition to interacting in ways beyond text, like voice inputs or uploading attachments
Tools: Going beyond the “Big One” (ChatGPT) to see what other chat tools have to offer but also exploring workflow-driven tools (Bolt.new for building apps, Jasper for copywriting, Caption for videos)
In-app AI features: Nearly every major application we’re using has some sort of AI enhancements (and a lot aren’t great), but don’t knock it ‘til you try it
And, an unspoken element: the best users are also good at documenting what’s worked versus what hasn’t, and which prompts get the best results. They collect the knowledge that helps them keep turning the wheel on their own successful AI use.
They are ruthless about outputs
Skilled AI users don’t just accept whatever the model spits out; they demand precision. They know the efficiency of using AI is a function of the constraints they place on it. Instead of vague requests like "Write a summary of this article," they specify:
Format: “Summarize this in three bullet points, each under 20 words.”
Tone: “Rewrite this to sound like an op-ed, and keep it between 800 and 1,000 words.”
Structure: “Break this into sections with H3 headings and a final takeaway.”
Specificity saves time. The less we have to tweak or rework AI outputs, the faster we move. Being strict upfront reduces the back-and-forth and makes it less painful to drop the result into whatever medium it’s headed for.
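One way to make that strictness a habit is to template it. Here’s a minimal sketch of a helper that bakes format, tone, and structure constraints into every request; the helper name and constraint wording are my own placeholders, not a prescribed formula:

```python
# A minimal sketch of baking constraints into every request. The helper
# name and constraint wording are placeholders, not a prescribed formula.
def constrained_prompt(task: str, fmt: str, tone: str, structure: str) -> str:
    return "\n".join([
        task,
        f"Format: {fmt}",
        f"Tone: {tone}",
        f"Structure: {structure}",
        "If any constraint can't be met, say so instead of guessing.",
    ])

print(constrained_prompt(
    task="Summarize the attached article.",
    fmt="Three bullet points, each under 20 words.",
    tone="Plain-spoken, no marketing fluff.",
    structure="Bullets only, ending with a one-line takeaway.",
))
```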
How to develop an intuitive approach to AI
Here’s how to work with AI, rather than just throwing words at it and hoping for the best.
Learn when to refine vs. when to restart
Not every AI response deserves refining. If a generated draft is almost right, a few tweaks can get it there. But when a response is completely off, it’s far more effective to scrap it and adjust the prompt. Look for things like:
Is the response lifeless, robotic or off-tone?
Was your request misunderstood?
Is the research or summary too shallow?
Just like in the world before AI, iterating endlessly wastes time. Knowing when to restart helps avoid the sunk-cost fallacy.
Make AI prove an approach won’t work
It’s easy to assume AI can’t do something when the first few tries don’t work. Instead of giving up, stay optimistic and push it further. Ask for different angles, simplify the task, or chunk it into smaller steps. For example:
Instead of: “Summarize this 10-page document.”
Try: “Summarize this document section by section, then combine the key takeaways.”
AI often fails simply because we don’t ask the right way. It’s okay to write off a task, but before declaring it impossible, be sure you’ve given it the ol’ college try.
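To make the chunking idea concrete, here’s a minimal sketch of the section-by-section approach, assuming the OpenAI Python SDK; the blank-line splitting rule is a stand-in for however you’d actually detect sections:

```python
# A minimal sketch of "summarize section by section, then combine,"
# assuming the OpenAI Python SDK. The blank-line chunking rule is a
# stand-in for however you'd actually detect sections.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize_long_document(text: str, chunk_size: int = 4000) -> str:
    # Step 1: split the document into roughly section-sized chunks.
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if len(current) + len(paragraph) > chunk_size and current:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current)

    # Step 2: summarize each chunk on its own.
    partials = [ask(f"Summarize the key points of this section:\n\n{c}") for c in chunks]

    # Step 3: combine the partial summaries into one set of takeaways.
    combined = "\n\n".join(partials)
    return ask(f"Combine these section summaries into five key takeaways:\n\n{combined}")
```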
Try different workflows for the same task
There’s never just one way to accomplish a task. Test different structures (and different inputs). For example, if you’re generating a social media post, you could:
Write a great hook and ask AI to continue on
Provide an outline and have AI take a stab at fleshing it out
Take a post concept and combine it with a template (i.e., a “proven formula” for said platform)
We often start similar tasks from different places. Locking yourself into a single approach limits creativity, problem-solving, and ultimately results.
Avoid prompting for the sake of prompting
It’s easy to fall into the trap of generating content just because AI makes it effortless. But strategic users know when to stop. Ask yourself:
Is this something I’d do if I didn’t have the luxury of AI?
Is this a priority in the grander scheme?
Does it need to be done now?
In other words, don’t do something just because you can. Using AI with intention is what separates the efficient users from the rest.
Treat AI like a conversation when you’re starting broad
If you don’t have a fully formed idea yet, use AI like a brainstorming partner. Instead of trying to engineer the perfect prompt up front, start with a general question and refine as you go. For example:
Instead of: “Write a blog post about productivity tips for remote workers.”
Try: “What are some overlooked challenges of remote work? How do different industries solve them?”
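In code terms, the difference between one-shot prompting and a conversation is just whether you keep the message history. A minimal sketch, assuming the OpenAI Python SDK (model name illustrative):

```python
# A minimal sketch of the conversational pattern: keep the running message
# history so each follow-up builds on what came before, instead of firing
# one-shot prompts. Assumes the OpenAI Python SDK; model name illustrative.
from openai import OpenAI

client = OpenAI()
history = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Start broad, then narrow with follow-ups that build on the last answer.
chat("What are some overlooked challenges of remote work?")
chat("Interesting. How do different industries solve the second one?")
print(chat("Turn that into an outline for a blog post."))
```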
Think in small, high-impact use cases first
The most effective AI use happens in bite-sized chunks. Not only because it gives us a smaller, clearer bullseye to hit, but also because it naturally builds in windows for critical thinking. Plus, it’s a great recipe for finding smaller, repeatable tasks for AI assistance.
For example:
Write a post outline instead of drafting the full article.
Analyze and list ways you can make an impact in a job instead of writing the whole cover letter.
Develop the tone, structure, and key messaging of an outreach email rather than writing the entire thing.
Input a technical challenge and have AI suggest potential diagnostic approaches rather than concrete solutions.
Input a decision scenario and map out potential consequences and alternative paths instead of making your decision.
Analyze communication patterns and empathetic engagement strategies, not complete responses.
What if you stopped worrying about writing the perfect prompt and focused on framing problems effectively? AI isn’t about tricks, hacks, or rigid formulas. At the end of the day, small shifts in thinking are the way to reduce the time spent fighting with AI.
Master thinking strategically, and you’ll get better results than 95% of people still trying to "engineer" their way through prompts.
Posts Mentioned
Four buckets of practical AI for individuals
Most people hear “Generative AI” and take it at face value. Yes, AI spits out content—drafts blog posts, generates images, composes music, makes up statistics. But that’s an incomplete picture of what capabilities AI holds for us as users.
[1] I’ve come to like the results of Google Gemini’s Imagen models over DALL-E. I use the free versions of both.