
Pick Your Use Case

Advanced · 18 minutes · Lesson 1 of 6


Learning objectives

  • Choose a concrete capstone use case
  • Avoid building a generic system
  • Define a problem worth optimizing

The strongest ChatGPT systems are built around one real recurring problem, not around a list of features.

If you start with the tools, you tend to overbuild. If you start with the use case, the right tools become clearer. This is the difference between a demo and a system that survives Monday morning.

[Figure: a funnel from broad role -> repeated task -> specific workflow]

What you'll learn
  • How to choose a use case that is specific enough to build around
  • Why a real recurring job is better than a broad role label
  • How to keep the capstone grounded in actual work
Why this matters

Generic systems feel impressive and then collapse in ordinary use.

Concrete systems survive because they are built around a problem that actually appears in your week: drafting client updates, researching competitors, prepping lessons, summarizing meetings, reviewing code, or organizing product thinking.

Most people who struggle with ChatGPT are not struggling with the technology. They are struggling with the question "what am I actually trying to do?" A well-chosen use case answers that question before you touch any feature. It turns ChatGPT from a toy you open when you are curious into a tool you open because there is work to do.

There is also a motivation benefit that is easy to underestimate. When your operating system is built around a real pain point, you are naturally motivated to improve it. You notice when the output is not quite right, and you revise the prompts. You notice when the workflow has a gap, and you fill it. That natural feedback loop only exists when the use case genuinely matters to you. An aspirational use case produces no feedback because you never run the workflow often enough to learn from it.

A specific use case also makes it possible to learn from failure. When your use case is vague, you cannot tell whether a bad result came from the model, the prompt, or the task framing. When it is specific, you can diagnose the problem precisely. That diagnostic clarity accelerates improvement: each iteration teaches you something concrete rather than leaving you confused about what went wrong.

These benefits compound. A system built around a well-chosen use case improves with every cycle. The prompts get sharper. The GPT instructions get tighter. The research workflow gets faster. The continuity layer gets leaner. None of that improvement happens if the use case is too broad to produce consistent feedback. Specificity is the foundation that makes everything else possible.

The core idea

Pick a repeated job with clear inputs, repeated friction, and obvious value if improved.

That job should be specific enough to design prompts, choose tools, and judge results against. "Marketing" is too broad. "Weekly customer-insight brief for a product team" is much better.

The best use cases share three properties. They happen on a schedule, not randomly. They have a recognizable input, such as a set of notes, a data export, or a question from a colleague. And they have a clear output, something you can hand off, publish, or act on. If you cannot name the input and the output, the use case is still too vague.

It also helps to choose a use case where failure is visible. If the output is wrong, you want to notice quickly. That feedback loop is what turns the capstone into a learning system rather than a static setup.

There is a useful hierarchy for evaluating use case candidates. The strongest candidates have all three properties: they recur, they have friction, and their output is recognizable. The next tier has two of three -- perhaps the task recurs and has friction but the output is hard to evaluate. Those can still work but require more discipline. Candidates with only one property are almost always too weak to build a system around. If a task happens often but has no friction and no clear output, automating it produces nothing useful. If a task has friction but happens rarely, the system will never get enough repetition to justify the setup cost.
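
This hierarchy is mechanical enough to sketch in code. The following minimal Python sketch encodes the tier logic described above; the property names and example candidates are illustrative assumptions, not part of the lesson.

# Minimal sketch of the use-case hierarchy above.
# Property names and the example candidates are illustrative assumptions.
def tier(recurs: bool, has_friction: bool, clear_output: bool) -> str:
    count = sum([recurs, has_friction, clear_output])  # how many of the three properties hold
    if count == 3:
        return "strong: build the system around it"
    if count == 2:
        return "workable, but requires extra discipline"
    return "too weak to justify the setup cost"

print(tier(recurs=True, has_friction=True, clear_output=True))    # e.g. a weekly client update
print(tier(recurs=True, has_friction=False, clear_output=False))  # frequent but frictionless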

One more consideration: choose a use case where you are the primary user, at least initially. Building a system for someone else's workflow before you have built one for your own introduces guesswork about needs, preferences, and quality standards. Start with your own work, prove the system, and then adapt it for others.

Use the capstone to solve a real repeated problem. Avoid designing a system for an imaginary future self.

How it works

  1. List recurring tasks. Focus on work that appears weekly or at least monthly. Be concrete: "draft weekly report" is better than "communication."
  2. Choose the one with the clearest friction. Repetition and pain are both useful signals. If a task happens often but causes no friction, it may not need a system.
  3. Define it tightly. Name the input, the output, and what good looks like. A use case without a defined input is a wish. A use case without a defined output is a direction.
  4. Stress-test the choice. Ask a colleague whether they would recognize this as a real repeated job. If they hesitate, the framing is probably too abstract.
  5. Write a one-sentence version. If you cannot describe the use case in one sentence, it is not specific enough yet.

What skilled users do differently

Skilled users resist the urge to pick something impressive. They pick something boring and repeated, because that is where consistent leverage lives.

They also define the use case in terms of the workflow, not the tool. Instead of "use ChatGPT for marketing," they say "draft a weekly competitive summary from three sources, formatted for the product team." That level of specificity makes every downstream decision easier: which prompts to write, which tools to enable, and what success looks like.

They evaluate the use case against a simple return-on-investment question: how much time does this task take now, and how much could ChatGPT realistically save? A task that takes ten minutes weekly is not worth a system that takes hours to build and maintain. A task that takes ninety minutes weekly and could be reduced to thirty is a clear win.
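
That return-on-investment question can be made concrete as a break-even estimate. Here is a back-of-the-envelope Python sketch: the ninety-to-thirty figures come from the paragraph above, while the five-hour build cost is a hypothetical assumption.

# Back-of-the-envelope payback estimate for the ROI question above.
# build_hours is a hypothetical setup cost, not a figure from the lesson.
def payback_weeks(minutes_now: float, minutes_after: float, build_hours: float) -> float:
    saved_per_week = minutes_now - minutes_after  # minutes saved each week
    if saved_per_week <= 0:
        return float("inf")  # the system never pays back its build cost
    return (build_hours * 60) / saved_per_week

print(payback_weeks(90, 30, build_hours=5))  # 5.0 weeks to break even: a clear win
print(payback_weeks(10, 5, build_hours=5))   # 60.0 weeks: probably not worth a system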

There is one more thing skilled users do that separates them from everyone else: they write the use case down and share it. Putting the one-sentence definition on paper makes it real. Sharing it with a colleague or manager creates gentle accountability. If someone else knows what your operating system is built around, you are more likely to actually build it rather than letting it stay theoretical.

They also look for use cases with a natural quality signal. A client update has a quality signal: the client either responds positively or asks for clarification. A meeting summary has a quality signal: attendees either recognize the summary as accurate or they do not. Use cases with built-in quality signals make the system self-correcting over time, because you learn from each cycle what to adjust.

Finally, they test their use case choice by asking one question: "Would I run this workflow next week even if no one was watching?" If the answer is no, the use case is aspirational, not real.

Three worked examples

Example 1: too broad

A freelance consultant says, "I want to use ChatGPT for client work." That is a role, not a use case. Client work could mean proposals, research, follow-up emails, invoicing, or strategy. Without choosing one, every system decision will feel arbitrary. The consultant tries to build a prompt library and immediately gets stuck because a library for "client work" could contain anything. The system design stalls because there is no focal point. Every decision requires answering the question "which kind of client work?" first, and that question should have been answered at the outset.

Example 2: well-scoped

The same consultant narrows it to: "Every Monday, I draft a one-page status update for each active client using my project notes from the past week." That is a repeated job with a clear input (project notes), a clear output (status update), and an obvious quality bar (the client finds it useful). Every capstone decision can now be evaluated against that workflow.

Notice what makes this use case strong. The frequency is weekly. The input is concrete: project notes from the past week. The output is specific: a one-page status update. The quality signal is clear: the client either finds it useful or they do not. Every part of the system -- prompts, tools, continuity, and review -- can be designed against this definition.

Example 3: different domain, same principle

A university teaching assistant considers "use ChatGPT for teaching." Too broad. They narrow it to: "After each week's discussion section, I summarize the three most common student misconceptions from my notes and draft a short clarification document to share before the next session." Input: discussion notes. Output: clarification document. Frequency: weekly. The use case is tight enough that prompts, tools, and quality criteria all become obvious.

This example also illustrates why the quality signal matters. The clarification document will be tested the following week: did students understand the concepts better? That built-in feedback loop turns the capstone into a learning system that improves with each cycle.

Prompt block

Help me choose a ChatGPT use case.

Better prompt block

Help me choose one capstone use case for ChatGPT.

Please evaluate my options based on:
- how often the task happens
- how much friction it creates now
- whether ChatGPT can realistically help
- whether success would be easy to measure

Then recommend one use case that is concrete enough to build a small operating system around.

Why this works

The better prompt makes selection depend on recurrence, friction, and measurability instead of vague excitement. It also asks for a recommendation, which forces the model to commit to a specific suggestion rather than listing all possibilities equally. That constraint mirrors how skilled users think about use-case selection: they want a decision, not a menu.

By structuring the evaluation criteria explicitly, the prompt also makes the reasoning transparent. You can see why one use case was recommended over another, which makes it easier to adjust the choice if the recommendation does not quite fit.

The prompt also establishes a habit that transfers beyond this capstone. Every time you consider using ChatGPT for something new, the same four criteria apply: frequency, friction, realistic fit, and measurable success. If a task scores low on all four, it is probably not worth building a system around. If it scores high on three or four, it is a strong candidate. That evaluation framework becomes a permanent skill, not just a one-time exercise.

Common mistakes
  • Choosing a use case that is too broad to design prompts for
  • Choosing something flashy rather than something repeated
  • Starting to design the system before the problem is clear
  • Picking a task that rarely happens, which makes it hard to iterate
  • Defining the use case in terms of the tool rather than the work
  • Building a system for someone else's workflow before building one for your own
  • Confusing "I wish I could do this" with "I actually do this every week"
Mini lab
  1. List five recurring tasks from your real work. Include only tasks that happen at least twice a month. Be specific: "draft client status emails from project notes" is better than "email."
  2. Score each task on three dimensions: frequency (how often it happens), friction (how painful it is), and fit (how well ChatGPT could help). Use a simple scale of one to three for each dimension.
  3. Pick the one with the highest combined score. If two tasks tie, choose the one with higher frequency, because more repetition means faster learning. (A short scoring sketch follows this list.)
  4. Write one sentence defining the use case: what goes in, what comes out, and who uses the result. If you cannot fit it in one sentence, the scope is too broad.
  5. Ask yourself: "Would I actually run this workflow next week?" If not, go back to step three and choose differently.
  6. Share your one-sentence use case with a colleague or friend and ask whether it sounds like a real, repeated job. Outside perspective often catches vagueness that you cannot see yourself.
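
If the scoring in steps two and three feels abstract, the selection rule is simple enough to sketch in a few lines of Python. The task names and scores below are made-up examples; only the one-to-three scale and the frequency tie-break come from the lab itself.

# Sketch of the mini-lab scoring rule (steps 2-3).
# Task names and scores are made-up examples on the 1-3 scale.
tasks = [
    # (name, frequency, friction, fit)
    ("draft client status emails from project notes",  3, 3, 2),
    ("summarize competitor news for the product team", 2, 3, 3),
    ("prep monthly lesson outlines",                   2, 2, 2),
]

def combined(task):
    name, frequency, friction, fit = task
    return frequency + friction + fit

# Highest combined score wins; ties break toward higher frequency (step 3).
best = max(tasks, key=lambda t: (combined(t), t[1]))
print(f"Build around: {best[0]} (combined score {combined(best)})")
# The first two tasks tie at 8; the client-email task wins on frequency.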

Do not skip step five. A use case that does not survive the "next week" test will not survive the capstone. The point of this exercise is not to find the most exciting task. It is to find the most repeatable one.

Save your one-sentence use case definition. Every subsequent capstone decision will be evaluated against it.

Key takeaway

Your operating system should be built around repeated work, not around feature curiosity. The best use case is the one you will still be running three months from now -- not the one that sounds most impressive today.