Many people have asked me how they can use AI better. That includes non-technical people in my personal life, as well as former coworkers and others at more traditional software companies that don’t use AI every day in core workflows.
Some of them feel like they are starting far behind. I don’t think that is true at all (this wave of AI is just getting started, and people are still figuring out how best to use the current technology), but hopefully by sharing what we have learned we can help people jump ahead on the learning curve.
So I’m going to be publishing a series of articles/blogs about different aspects of using AI.
Let’s start with “Prompting”
Prompting is already a super critical skill, and it seems very likely to be the single most important skill for knowledge workers going forward.
I personally believe if you did nothing at all right now except become an expert in prompting and formed an opinion on which models were best for various tasks, you would be building the most important white-collar skill set for the next 25 years.
We’ve sent over 250 million prompts since starting Copley.
I use prompts both professionally and personally every day.
There are prompts we use that have gone through so many iterations and are involved in such critical workflows that they feel like a very valuable part of our business. I imagine other AI companies feel similarly, and some much stronger. Sometimes I wonder if some companies are essentially just a few key prompts…
This will be a multi-part series on prompting, and I want to try something interesting to start.
We’re going to start by working backwards.
I am going to write a prompt for one of my near daily driver LLMs (my personal ChatGPT account) about how I use it and we’re going to discuss the results. Maybe this will allow people to catch right up to how I’m using this technology. I’ll then provide more detail in further posts for those who would like more explanation.
The Prompt
“I want to help people get better at prompting.
Can you review the chat history I have here and consider how good I am as a user creating prompts. How the prompts evolve, what the strengths and weaknesses you see.
Can you also create a list of notable chats in the form of:
– Chat
– Reason it is notable”
Before we discuss ChatGPT’s response, let’s consider the prompt above. It may seem very simple (and it is!) but there are a few key things I’m doing intentionally:
- Setting a goal — “I want to help people get better at prompting” — Explicitly telling the LLM the goal of the conversation creates a clear structure for the outcome. Stating the desired outcome up front is very helpful for getting high-quality responses.
- Loading context — “review the chat history” — I am instructing the LLM what to load into its context window. Using and managing context is a critical part of using LLMs.
- Instructions/Analysis — “consider how good I am as a user creating prompts. How the prompts evolve, what the strengths and weaknesses you see” — This is the core workflow component of the prompt: the instructions. We’ve set the goal, set the context, and now we want the LLM to do something for us. Here I am intentionally providing vague instructions with bad grammar. Because of the type of request I am making, I want to give the LLM a lot of freedom to interpret my instructions. The request is incredibly meta (asking an LLM for feedback on how we ask an LLM), so being vague and using inexact language conveys the general idea of the request and lets the LLM fill in the rest. I do this very intentionally with LLMs in both directions, ranging from incredibly specific CRITICAL MESSAGES to “tell me about x”. I also never let myself get stuck with writer’s block when talking to an LLM. With humans you may want to convey exact meaning, but with LLMs you can quickly use the conversation itself to arrive at the right vocabulary. So spit out garbled ideas directionally, then refine once the LLM’s response has helped clarify your thoughts.
- Structured Output — “Can you also create a list of notable chats in the form of” — Here I’m asking the LLM for a form of structured output (via the API, you can be very explicit about the JSON the LLM returns). Asking for this structure is important for several reasons, at least one of which is not obvious. The obvious reasons are things like clarifying and formatting the response for usability/readability. The less obvious outcome is that the structure shapes how the LLM completes the request in its entirety: when performing the Instructions/Analysis above, it will pull its examples from this output. So, as written, this prompt should essentially produce a report of notable chats that can help people become better at prompting.
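To make that last point concrete: when using an LLM through an API rather than a chat window, you can pin the response to an explicit JSON schema. Below is a minimal sketch in Python that only assembles the request body rather than calling a provider; the model name, field names, and payload shape are illustrative of a generic chat-completions-style structured-output API, not any one vendor’s exact contract.

```python
# Sketch: requesting structured output from a chat-completions-style API.
# The schema mirrors the "– Chat / – Reason it is notable" format from the
# prompt above; all names here are illustrative placeholders.
notable_chats_schema = {
    "type": "object",
    "properties": {
        "notable_chats": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "chat": {"type": "string"},
                    "reason_notable": {"type": "string"},
                },
                "required": ["chat", "reason_notable"],
            },
        }
    },
    "required": ["notable_chats"],
}

def build_request(prompt: str) -> dict:
    """Assemble a request body asking the model to reply in the schema above."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "notable_chats", "schema": notable_chats_schema},
        },
    }

request = build_request("Review my chat history and list notable chats.")
```

A real call would send this payload to the provider and get back JSON guaranteed to match the schema, which is the API equivalent of the “– Chat / – Reason it is notable” format in the prompt.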
Last note — The prompt we are discussing here was one I wrote in a couple of seconds. It could be improved in many ways, but speed is a huge part of the value of prompting. I meant what I said earlier about moving fast and avoiding writer’s block: just rip and improve as you go.
Chat Transcript
From Me
“I want to help people get better at prompting. Can you review the chat history I have here and consider how good I am as a user creating prompts. How the prompts evolve, what the strengths and weaknesses you see. Can you also create a list of notable chats in the form of: – Chat – Reason it is notable”
Got it — I’ll review your prompting style based on the chat history you’ve shared and give you an evaluation of how good you are at creating prompts. I’ll cover:
- How your prompts evolve
- Strengths
- Weaknesses / areas for improvement
- A list of notable chats with reasons they stand out
Seems good.
1) How Your Prompts Evolve
Your prompting style tends to iterate in layers:
- You often start with a broad, practical request (e.g., “create a list of ecommerce/DTC industry labels”), then refine by asking for structure, formatting, or better phrasing.
- When working on technical code, you move from a core functionality request to styling/UX polish and then to edge cases or props (e.g., the CometSpinner evolution from working to pixel-perfect alignment).
- In financial modeling or hedging strategies, you prompt first for raw math, then ask for different scenarios, and finally for cleaned-up summaries for external audiences.
This pattern shows you use prompting as a collaborative drafting process, treating ChatGPT almost like a colleague you iterate with.
All three of these examples are saying the same thing. When working with an LLM, I start with foundational things, and then work my way up, adding detail along the way. At the end, I refine the output into the desired format with any final tweaks.
When doing more complex things, that means working with the LLM to come up with a plan first, not trying to immediately jump into getting usable LLM output.
2) Strengths
- Clarity of Intent: You usually make clear what the output should look like (e.g., “full component file so I can copy/paste,” or “clean up that summary without emotes and bold”).
- Iterative Precision: You refine in small, sharp adjustments, which helps reach a “perfect” version.
- Professional Context: Your prompts tie to real deliverables (investor letters, contracts, UX polish) — they aren’t abstract, so the outputs are highly actionable.
- Multi-disciplinary Skill: You comfortably switch between domains (legal, financial, technical, design), which shows you understand how to reframe prompts depending on the task.
Clarity of Intent = Clearly articulating the desired output.
Iterative precision — LLMs are often more accurate when dealing with small, precise instructions. Trying to do too many things at once can cause quality to drop, so be somewhat judicious about what you do in each prompt. Often, using three prompts to make three changes results in significantly higher quality than doing all three in one prompt.
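As a minimal sketch of what that looks like in practice: three small change requests sent as three sequential turns, with the running message history carried forward each time. The `send` function here is a hypothetical stand-in for a real API call, and the spinner-tweak prompts are just examples.

```python
# Sketch: splitting three changes into three sequential prompts by keeping
# a running message history, instead of cramming all edits into one turn.

def send(messages: list[dict]) -> str:
    """Placeholder for an LLM call; a real version would hit an API."""
    return f"(response to: {messages[-1]['content']})"

messages = [{"role": "user", "content": "Write the spinner component."}]
for change in [
    "Move it 25px closer to the text.",
    "Make the tail wider.",
    "Make the head bigger.",
]:
    reply = send(messages)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": change})
```

Each turn asks for exactly one adjustment, so the model can focus on it, while the accumulated history keeps the earlier decisions in context.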
3) Weaknesses / Areas for Improvement
- Occasional Over-Iteration: Sometimes you use several micro-prompts when one consolidated, well-scoped prompt could cover the adjustments (e.g., instead of “move 25px closer,” then later “wider tail,” then “head bigger,” you might define a design spec up front).
- Under-Specification at First Step: Your first prompt in a chain is sometimes too open, meaning you need extra follow-ups to lock the format (e.g., financial summaries where you later clarify “no emotes,” “bold only,” “show scenarios”).
- Context Repetition: You often restate background information (like your company context or investor framing). You could bundle constraints into a “prompt style guide” for yourself to reuse.
- Exploration vs. Decision: You explore options well but occasionally don’t decide up front whether you want brainstorming vs. a polished final — this makes the assistant give a mix of both.
Occasional Over-Iteration, Under-Specification at First Step — Mostly I am doing this on purpose, but sometimes I might just be lazy or rushing, so I may adjust this some.
Context Repetition — This is definitely true for my personal ChatGPT account. For business, we have a ton of stored contextual information, and entire workflows that combine SQL and web search to generate context for prompts. It is a great idea to have pre-written “context dumps” available to bring into prompting. Just don’t let them go stale; make sure to update them, which LLMs make very easy!
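As a sketch of the idea, here is a hypothetical pre-written context dump that gets prepended to each prompt, with a simple staleness check so it can’t quietly go out of date. The content, dates, and the 90-day threshold are all made up for illustration.

```python
from datetime import date, timedelta

# Sketch: a reusable "context dump" prepended to prompts, with a staleness
# check. Everything in CONTEXT_DUMP is illustrative placeholder content.
CONTEXT_DUMP = {
    "updated": date(2025, 1, 15),
    "text": "Company: Copley. Audience: investors. Tone: concise, professional.",
}

def build_prompt(task: str, today: date, max_age_days: int = 90) -> str:
    """Prepend the stored context to a task, refusing to reuse a stale dump."""
    age = today - CONTEXT_DUMP["updated"]
    if age > timedelta(days=max_age_days):
        raise ValueError("Context dump is stale; refresh it before reuse.")
    return f"Context:\n{CONTEXT_DUMP['text']}\n\nTask:\n{task}"

prompt = build_prompt("Summarize Q4 results for the investor letter.",
                      today=date(2025, 2, 1))
```

The staleness guard is the key design choice: forcing a refresh is cheap (an LLM can rewrite the dump in seconds), while silently reusing outdated context degrades every prompt that includes it.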
Exploration vs. Decision — This is a good point, and related to clarity of intent above. ChatGPT thinks my ability to clearly articulate the desired output is my top strength, but also a spot for improvement. What I think this means: if you are just exploring ideas vaguely, that is still useful context for the LLM. When I’m looking for a specific outcome, I am always explicit about that, but when I am just exploring I probably forget that is also worth calling out explicitly.
4) Notable Chats
Here’s a curated list of your notable chats and why they stand out:
- SPY / VIX Hedge Strategy (0823T10:31) — Shows your ability to prompt for layered financial analysis — math, scenarios, cleaned summaries, and behavioral strategy (limit sells, staggering). Great example of iterative refinement toward professional communication.
- CometSpinner Iterations (October 2024 series) — Excellent illustration of technical + design prompting. You went from working component → visual alignment → size/props → brand color integration → perfect tail fade. Demonstrates how prompting can achieve pixel-level polish.
- Nanny Feedback Discussion (0713T21) — Strong use of ChatGPT for sensitive communication drafting. You asked for rephrasing to balance professionalism and empathy, while also integrating practical elements (bonus structure, cooking schedule).
- Investor Launch Support Ask (0103T16) — Highlights how you use prompts to hone tone for investors — turning casual “Ask” into concise, professional phrasing.
- Risk Management Conference Report (0303T22) — You used prompting for a strategic business deliverable (industry report for a proposal). Shows strong ability to frame outputs for external, high-stakes audiences.
- Onboarding Flow Integration Debugging (0104T11) — Technical debugging + front-end user experience refinement. You gave code context and asked for immediate DOM updates without refresh — a practical engineer’s use of prompting.
- Tail-Risk Hedging Strategies (0504T14) — High-level financial strategy exploration with clear constraints (hedging with loss limit). Good example of specifying constraints to shape model outputs.
- Med Spa Investment Structure (0222T15) — Shows your use of ChatGPT for deal structuring/legal drafting — turning a vague business idea into structured milestone-based compensation and ownership plan.
Overall Evaluation
You’re a strong prompter who uses ChatGPT as a cross-disciplinary partner. Your biggest strengths are clarity of refinement and real-world application. Your biggest growth opportunity is in front-loading constraints so fewer iterative prompts are needed.
Many of the chats above touch on general themes that LLMs can be exceptionally helpful for, and that readers may be interested in. So in future posts I’m going to go into more detail on specific chats and discuss how the LLM helped me make decisions.