The Lazy Marketer's Guide to Not Writing Terrible AI Prompts


I've spent this week deep in the weeds automating workflows with AI and testing new models for marketing applications. What keeps hitting me over and over again is that no matter what fancy model you're using or what complex system you're building, the ability to construct and optimize your prompts remains the core skill that determines your success. So I thought I’d walk you through my real-world process. This isn't theoretical; it's what I actually do when I’m trying to wrangle AI into doing my bidding:

Start with a lazy 2-sentence prompt. Watch as the output is complete garbage. Accept that you'll need to put in more effort.

Use the framework shared by OpenAI President Greg Brockman as your foundation:

  • Clear objective statement

  • Relevant background information

  • Specific instructions

  • Examples if helpful

  • Format requirements

This forms the core structure of your prompt, no matter what approach you take next.
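If you ever script this (say, as part of automating a workflow), the same five-part structure translates directly into a reusable template. Here's a minimal sketch in Python using the OpenAI SDK; the model name and the example field values are placeholders, not recommendations.

```python
# Minimal sketch: assembling the five-part prompt structure in code.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and the example values are placeholders.
from openai import OpenAI

client = OpenAI()

def build_prompt(objective, background, instructions, examples, output_format):
    """Concatenate the five components into a single prompt string."""
    return "\n\n".join([
        f"Objective: {objective}",
        f"Background: {background}",
        f"Instructions: {instructions}",
        f"Examples: {examples}",
        f"Format: {output_format}",
    ])

prompt = build_prompt(
    objective="I need a 3-email onboarding sequence for new SaaS customers.",
    background="Our product is a marketing automation tool; churn peaks in the first week.",
    instructions="Focus on customer success, not upselling.",
    examples="Email 1 might walk the user through setting up their first campaign.",
    output_format="For each email: subject line, send timing, goal, key points, call-to-action.",
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```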

Depending on what you're trying to accomplish, select one of these methods:

Ask for multiple options

When to use it: When you need options rather than a single definitive answer. How to do it: "Generate 3-5 different approaches to [your request]."

Step-by-step analysis

When to use it: For complex problems requiring careful analysis. How to do it (see the code sketch after these steps):

  1. Preparation prompt: "Break down [question] into steps, carefully considering each detail and logical connection."

  2. Results prompt: "Give me the final answer based on the above analysis."
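If you're running this through an API rather than a chat window, the two prompts are just two turns in the same conversation: the step-by-step breakdown stays in the message history, so the second prompt can build on it. A rough sketch, with the same assumptions as above (OpenAI Python SDK, placeholder model name):

```python
# Sketch of the two-prompt sequence as two turns in one conversation.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

question = "Which onboarding email should we A/B test first, and why?"

# Turn 1: the preparation prompt produces the step-by-step analysis.
messages = [{
    "role": "user",
    "content": f"Break down the following question into steps, carefully "
               f"considering each detail and logical connection: {question}",
}]
analysis = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": analysis.choices[0].message.content})

# Turn 2: the results prompt asks for a final answer based on that analysis.
messages.append({"role": "user", "content": "Give me the final answer based on the above analysis."})
answer = client.chat.completions.create(model=MODEL, messages=messages)
print(answer.choices[0].message.content)
```

Keeping both turns in one conversation matters: if you start a fresh chat for the second prompt, the model loses the analysis it's supposed to build on.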

Iterative refinement

When to use it: When fine-tuning output through conversation. How to do it (there's a code sketch after the steps):

  1. Write initial prompt

  2. Write subsequent prompts to modify the output: "Now do X," "Revise to include Y"

  3. Ask it to incorporate your feedback into the original prompt so you can save it for future use
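In a script, this back-and-forth is simply a conversation you keep appending to, with a final turn asking the model to fold your feedback into a prompt you can reuse. A rough sketch, again assuming the OpenAI Python SDK and a placeholder model name:

```python
# Sketch of iterative refinement: keep one running conversation, append your
# feedback each round, then ask for a consolidated prompt you can save.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

messages = [{"role": "user", "content": "Draft a welcome email for new trial signups."}]

for _ in range(3):  # a few refinement rounds; adjust to taste
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    draft = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    print(draft)
    feedback = input("Your feedback (blank to stop): ").strip()
    if not feedback:
        break
    messages.append({"role": "user", "content": feedback})

# Final turn: have the model fold all the feedback into a reusable prompt.
messages.append({
    "role": "user",
    "content": "Rewrite my original request as a single prompt that incorporates "
               "all of the feedback above, so I can save it for next time.",
})
saved = client.chat.completions.create(model=MODEL, messages=messages)
print(saved.choices[0].message.content)
```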

Outline, then expand

When to use it: For creating longer, structured content. How to do it (code sketch after the steps):

  1. Outline prompt: "Give me an outline for [topic]"

  2. Expansion prompt: "Expand on [specific section]"
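The outline-then-expand pattern also loops nicely if you need it in a script: fetch the outline once, then expand each section in its own request. A sketch under the same assumptions (OpenAI Python SDK, placeholder model name, and a naive split of the outline into lines):

```python
# Sketch of the outline-then-expand pattern as a simple loop.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder, and splitting the
# outline on newlines is a deliberate simplification.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder
topic = "reducing churn for new SaaS customers in their first 30 days"

# Outline prompt: ask for one section per line so it is easy to split.
outline = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": f"Give me an outline for {topic}. One section per line, no numbering."}],
).choices[0].message.content

# Expansion prompt: expand each section in its own request.
for section in [line.strip() for line in outline.splitlines() if line.strip()]:
    expanded = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Expand on this section of a piece about {topic}: {section}"}],
    ).choices[0].message.content
    print(expanded, "\n")
```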

Structured output

When to use it: When you need data in a specific format for code, spreadsheets, or easier understanding. How to do it: "Provide a structured analysis of [topic] in [JSON/CSV/table] format"
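Asking for JSON pays off most when the answer is going straight into code or a spreadsheet. Here's a minimal sketch that requests JSON and parses it; the model name and the field names are placeholder assumptions, and real responses sometimes need cleanup before parsing.

```python
# Sketch: requesting structured JSON output and parsing it.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and the field names are placeholders.
import json

from openai import OpenAI

client = OpenAI()

prompt = (
    "Provide a structured analysis of our onboarding email sequence in JSON format. "
    "Return only valid JSON: a list of objects with the keys "
    '"subject_line", "send_day", "goal", and "call_to_action".'
)
raw = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# This will raise if the model wraps the JSON in extra prose or code fences,
# which is exactly the kind of drift a clear format instruction reduces.
emails = json.loads(raw)
for email in emails:
    print(email["send_day"], email["subject_line"])
```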

Have the AI write the prompt for you

When to use it: For most general tasks when you want quality with minimal effort. How to do it (a code sketch follows these steps):

  1. Ask:
    "Write the best prompt you can to answer the following question {insert your question here}. Please use the following framework:

    • Goal Statement: Write a clear, focused objective at the start. Begin with phrases like "I require" or "I need." Keep it concise but complete.

      • For example: "I need to create an email sequence for new SaaS customers that reduces churn in the first 30 days."

    • Return Format: Specify precisely how you want the information presented. List each required element:

      • List every data point needed

      • Include any specific formatting requirements

      • For example: "Please provide the subject line, send timing (days after signup), primary goal, key messaging points, and specific call-to-action for each email."

    • Warnings and Requirements:

      • Outline any critical checks or validations

      • Highlight potential pitfalls

      • Specify must-have criteria

      • Note any deal-breakers

      • For example: "Ensure emails focus on customer success rather than upselling. Avoid generic welcome language. Each email must include one clear action for the user to take."

    • Context Details:

      • Conclude with relevant background information

      • Share your experience level

      • Note preferences

      • Mention constraints

      • Include any helpful context. For example: "Our product is a marketing automation tool. Current data shows most users abandon after their first login. Our target audience is marketing managers at mid-size companies who are typically juggling multiple tools and responsibilities."

  2. Evaluate the prompt it gives you.

  3. Copy-paste that prompt back to the AI.
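If you wanted to wire this loop into a script, it's just two calls: one to generate the prompt, one to run it, with your evaluation step in between. A rough sketch with the same assumptions as the earlier snippets (OpenAI Python SDK, placeholder model name):

```python
# Sketch of the "have the AI write the prompt, then run that prompt" loop.
# Assumes the official OpenAI SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

def ask(prompt: str) -> str:
    """One-shot helper: send a single user message and return the reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "How do I reduce churn for new SaaS customers in their first 30 days?"

# Step 1: ask the model to write the prompt, using the Goal / Return Format /
# Warnings / Context framework described above.
generated_prompt = ask(
    f"Write the best prompt you can to answer the following question: {question}\n"
    "Please structure it with a Goal Statement, Return Format, "
    "Warnings and Requirements, and Context Details."
)

# Step 2: evaluate (and edit) the generated prompt before reusing it.
print(generated_prompt)

# Step 3: feed the generated prompt back in as a fresh request.
print(ask(generated_prompt))
```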

I've found that using voice-to-text for the context part of my prompts works surprisingly well. When typing, I tend to self-edit and skip details because I’m too lazy to type them all out. By speaking my request instead, I naturally provide more background information and nuance.

Most commercial LLM apps offer dictation features. I simply talk through what I'm trying to accomplish, explaining it as I would to a colleague. This tends to give the AI a more complete picture of my needs without requiring me to meticulously type out lengthy context, and my results are SO. MUCH. BETTER.

When I’m getting poor results, I double-check the following potential issues:

  • Vague objectives: I haven’t been clear enough about what I want, either in terms of the results I’m after or the output format I need.

  • Missing context: This is usually laziness on my part. A little voice chat can go a long way, or, if I’m a bit uncertain myself, I have the AI start asking me questions; I find that clarifies the situation significantly.

  • Conflicting instructions: Sometimes I ask for two incompatible things, especially if the prompt is pretty involved.

  • Format confusion: It’s easy to overlook output instructions. But being clear here really does help, even if the format you choose is somewhat arbitrary. In my experience, LLMs thrive with some constraints.

The difference between getting mediocre results and outstanding ones comes down to how well you can articulate what you want. I've seen this repeatedly in my own work — the time invested in crafting better prompts pays dividends in output quality and reduces the back-and-forth refinement cycle.

As you try each of these techniques on your next few AI tasks, pay attention to which ones work best for different types of requests. Before long, you'll find you’re developing an intuitive sense for which prompt structure will get you the results you need.

If you're already a prompting master, what's working for you? Have you found other prompt techniques that deliver consistently good results? I'd love to hear what they are!
