
Reading and Exporting the Report

Intermediate · 16 minutes · Lesson 5 of 5


Learning objectives

  • Review a research report for quality instead of passively accepting it.
  • Check claim support, gaps, and output fitness.
  • Export or reshape the report into a useful artifact for the next step.

The report is not the end of the work. It is the point where synthesis becomes review. If you read it passively, you miss the final chance to catch weak support, bad framing, or an output shape that does not serve the decision ahead. The most expensive reports are the ones that get forwarded unchecked.

Illustration: report review marks on claims, evidence, caveats, and next-step recommendations.

What you'll learn
  • How to review a report for evidence quality and usefulness.
  • How to extract the signal without flattening caveats.
  • How to reshape the report for a specific audience or artifact.
Why this matters

A good report can still produce a weak outcome if it is exported badly. The research may be solid, yet the memo, meeting note, or recommendation built from it can still come out vague or overconfident.

A disciplined report review also preserves credibility. You catch unsupported claims before they travel into decisions or documents with your name on them. Once a claim leaves the report and enters a slide deck or a memo, it becomes harder to question. Review is your last clean chance to catch problems.

There is a specific risk worth naming: deep research reports are fluent. They read well. The language is confident, the structure is logical, and the citations look thorough. That fluency makes it easy to trust the report more than the evidence warrants. A claim that is stated clearly and cited inline feels reliable, but the citation may point to a weak source, the claim may overstate what the source actually says, or the source may be outdated. Fluency is not evidence. Reading critically means separating the quality of the writing from the quality of the support.

This matters especially when the report will travel. A report you read and discard is low-stakes. A report you forward to your team, paste into a strategy document, or use to justify a budget decision is high-stakes. The review step is where you earn the right to put your name on the output. Skipping it means you are staking your credibility on work you did not verify.

What a stronger researcher does differently

A novice reads the report once, skimming for the conclusions, and then either accepts it wholesale or asks for a summary. This is the most common failure mode in deep research: the report arrives, it looks polished, and the reader treats polish as evidence of quality. They forward it, paste it into a doc, or base a decision on it without checking whether the claims are actually supported.

A stronger researcher reads the report as an editor, not as a consumer. They check the strongest claims first, because those are the ones most likely to travel into decisions and documents. If a claim says "the market is growing at 30% annually," the researcher asks: where did that number come from? Is it sourced? Is the source credible? Is the time period specified? They also check the weakest claims -- the ones that sound hedged or vague -- because those are often where the system ran out of good evidence and filled the gap with plausible-sounding language.

The strongest researchers also compare the report back to the original objective. Did the report actually answer the question that was asked, or did it answer an adjacent question that was easier to research? This happens more often than people expect, especially when the objective was slightly ambiguous. The report may be well-written and well-sourced but still miss the point. That is a framing failure, and it is catchable only if you read with the objective in mind.

There is one more habit worth adopting: asking the system to identify what the report did not cover. A strong follow-up after receiving a report is: "What are the most important questions related to my objective that this report does not answer?" This surfaces blind spots and helps you decide whether to run a follow-up query, do additional manual research, or treat the gaps as acceptable for your current decision.

Finally, stronger researchers treat the report as the beginning of a conversation, not the end. After reviewing, they ask follow-up questions: "Which of these three recommendations has the weakest evidence?" or "What would change this conclusion?" This iterative engagement extracts far more value from a single deep research run than simply reading and forwarding.

The core idea

Read the report in three passes. First, check whether it answers the objective you set. Second, check the strongest claims for support and the weakest claims for overreach. Third, decide what the next artifact should be: memo, table, recommendation note, slide outline, or decision brief.

Export is not just format conversion. It is adaptation. Different audiences need different levels of detail, caveat density, and structure. The point is to preserve the logic while changing the packaging.

The reason this three-pass approach works is that each pass catches a different class of problem. The first pass catches framing mismatches -- the report answered the wrong question. The second pass catches evidence problems -- claims that are unsupported, overstated, or missing important caveats. The third pass catches workflow mismatches -- the report is good but unusable in its current form. Skipping any pass lets a different kind of failure through.

There is also a sequencing logic to the three passes. If you start by checking evidence quality before checking alignment, you may spend time verifying claims in a report that answered the wrong question entirely. If you start by designing the export before checking evidence, you may produce a polished artifact built on unreliable foundations. The order matters because each pass builds on the previous one: alignment first, evidence second, packaging third. Following this order prevents the most expensive kinds of wasted effort.

It is worth noting that the three-pass approach does not require three separate read-throughs. With practice, you can run all three checks in a single careful reading. The key is knowing what you are looking for: alignment, evidence, and utility. As long as those three lenses are active while you read, the order and speed become a matter of personal preference.
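
The second pass is the easiest one to get help with. A follow-up prompt along these lines -- the exact wording is only a sketch to adapt, not a fixed template from this lesson -- asks the system to lay out its own evidence:

List the five strongest claims in this report and the three weakest. For each one:
- quote the claim
- name the source it rests on
- say whether that source supports the claim as stated, supports only a weaker version of it, or does not actually address it
- note whether the source's date matters for the claim

The system's self-assessment is not a substitute for opening the sources yourself, but it gives you a ranked list of where to look first.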

Export formats

Deep Research reports can be exported in several formats:

  • Markdown -- the default format, useful for pasting into tools that render Markdown or for further editing.
  • Word (.docx) -- useful for sharing with colleagues or teams that work in Microsoft Word.
  • PDF -- useful for final distribution or archiving.

There is also a fullscreen document viewer with a built-in table of contents, which makes it easier to navigate longer reports before you decide how to export. Use the viewer to review and then export in whichever format fits the downstream workflow.

The choice of export format matters more than people think. A PDF is a finished artifact -- it signals "this is done" and discourages further editing. A Word document invites revision and collaboration. Markdown is best when the report will be pasted into another tool or further transformed. Choose the format based on what happens next, not on personal preference.

When exporting for a team, consider what happens after the artifact lands. Will people annotate it? Will it be discussed in a meeting? Will it be embedded in a larger document? Each scenario favors a different format. Thinking one step downstream -- a core skill from Module 01 -- applies to export just as much as it applies to prompting.

How it works

  1. Review the report against the original objective before you worry about style. Open the original objective side by side with the report and check whether each part of the objective was addressed. If something was skipped or answered superficially, note it. A prompt sketch for this check appears just after this list.
  2. Mark claims that need stronger support, clearer caveats, or tighter wording. Pay special attention to quantitative claims ("the market grew 40%") and comparative claims ("X is clearly better than Y"). These are the claims most likely to travel into your downstream work, so they need the strongest support.
  3. Ask ChatGPT to reshape the report for the target audience while preserving uncertainty and source quality signals. Be explicit about what the audience needs: a non-technical executive needs different framing than a domain expert, and a decision memo needs different structure than a background brief.
  4. Before sharing or using the export, do one final check: would you be comfortable defending every major claim in a meeting? If any claim makes you hesitate, it needs either stronger sourcing or an explicit caveat.
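
For the first step, the side-by-side check does not have to be entirely manual. One possible follow-up prompt -- a sketch to adjust, with the bracketed text standing in for your own objective -- is:

Here is the objective I originally gave you: [paste the original objective].
Go through it part by part. For each part, tell me whether the report addresses it directly, addresses it only partially or superficially, or does not address it at all. Do not defend the report; just map the coverage.

You still decide whether the gaps matter, but a coverage map like this makes the first pass faster and harder to skip.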

This four-step process becomes faster with practice. The first time, it may take ten to fifteen minutes. After a few reports, you will develop an instinct for where problems tend to hide. Most people find that quantitative claims and sweeping comparative statements are the most common trouble spots.

One practical technique: before starting the full review, read only the conclusion section. If the conclusion sounds like it could apply to any company or any decision, the report has probably drifted from your objective. A good conclusion should be specific enough that a stranger could read it and understand what decision it supports and what action it recommends. If it reads like a generic summary, the problem is usually in the framing, not in the evidence.

Three worked examples

Example 1: passive consumption

Summarize this report for me.

This prompt asks for compression, not adaptation. The result will be a shorter version of the report that preserves the same structure, the same emphasis, and the same problems. If the report overweighted a weak source, the summary will too. If the report buried an important caveat in paragraph twelve, the summary will likely drop it entirely. "Summarize" is the least useful thing you can ask for after a deep research run, because it throws away your chance to reshape the output for its actual purpose.

The deeper issue is that "summarize" has no target. It does not say who the summary is for, what decisions it supports, or what level of detail matters. As a result, the system applies generic compression: keep the main points, drop the details. That is almost never what you actually need.

Example 2: converting to a decision artifact

Turn this deep research report into a one-page decision memo.

Requirements:
- keep the original objective visible
- preserve the strongest supporting claims and the main caveats
- cut repetition and background detail
- end with 3 recommended next steps
- if the report contains weakly supported conclusions, flag them rather than polishing them away

This version defines the target artifact (decision memo), sets structural requirements, and -- critically -- includes an instruction to preserve uncertainty rather than smooth it away. That last point matters because the default behavior of summarization is to make things sound cleaner and more confident than the underlying evidence supports. Explicitly asking the system to flag weak conclusions counteracts that tendency.

Notice the structure of the prompt: it names the artifact, sets length and content constraints, and includes an honesty instruction. That is the pattern for all good export prompts. The artifact defines the shape. The constraints define the boundaries. The honesty instruction prevents the most common failure mode.

Example 3: reshaping for a different audience

Adapt this research report into a briefing note for a non-technical executive team.

Requirements:
- lead with the bottom line: what should we do and why
- limit to 500 words maximum
- replace technical terminology with plain language equivalents
- include a "confidence level" note next to each major recommendation (high, medium, or low, based on source quality)
- preserve the two strongest counterarguments so the reader knows what the opposition would say
- end with the single most important question that this research did NOT answer

This example shows a more demanding export. Notice the "confidence level" instruction -- this forces the system to evaluate its own evidence quality per recommendation, which produces a much more honest artifact than a flat summary. The instruction to name the unanswered question is also valuable: it turns the export into an honest assessment rather than a closed case.

The contrast between Example 1 and Examples 2 and 3 captures the key principle of this lesson. "Summarize" delegates all decisions about structure, emphasis, and audience to the system. "Convert into a decision memo with these requirements" keeps those decisions where they belong -- with you, the person who knows what the output needs to accomplish.

Why this works

The better prompt converts the report into a defined artifact while protecting against the common failure mode of smoothing away uncertainty. The underlying mechanism is that export is a second act of specification. Just as the research objective shaped the report, the export prompt shapes the final deliverable. When you specify the artifact type, the audience, the constraints, and the honesty requirements, you get an output that serves its purpose. When you just say "summarize," you get a shorter version of whatever the system already produced, with all its original biases intact.

The instruction to flag weakly supported conclusions is doing especially important work. Left to its own defaults, the system produces a smooth, confident summary -- that is what language models are trained to do. But confidence in the export should reflect confidence in the evidence. When you explicitly instruct the system to preserve uncertainty, you create an output that is honest about what it knows and what it does not. That honesty is what makes the artifact trustworthy enough to use in real decision-making. A polished summary that rounds up weak evidence into strong conclusions is worse than useless -- it is actively misleading.

Common mistakes
  • Treating the report as final instead of reviewed work. A polished report is not necessarily an accurate one.
  • Exporting into a cleaner format but losing the caveats that matter. Summarization naturally strips uncertainty.
  • Failing to compare the report back to the original objective. The report may answer an adjacent question rather than the one you asked.
  • Forwarding the raw report to decision-makers without adapting it for their context, time constraints, or expertise level.
  • Not checking the sources cited in the report. A claim with an inline citation looks credible, but the source may be outdated, low-authority, or misinterpreted.
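
For that last mistake, a spot check does not require rereading every source. A follow-up prompt in this spirit -- again only a sketch, adjust it to your decision -- is:

Take the three claims in this report that carry the most weight for my decision. For each, tell me which source it comes from, when that source was published, and whether the claim goes beyond what the source actually says. If you are not sure, say so.

Then open the sources behind those three claims yourself; the prompt narrows the list, it does not replace the check.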
Mini lab
  1. Choose a deep research report you have generated, or a long sourced answer from a previous session.
  2. Read it once and check: does it actually answer the objective you set? Write one sentence noting any gap between what you asked for and what was delivered.
  3. Read it again and identify the single strongest claim and the single weakest claim. For each, note whether the source is cited and whether the source is credible.
  4. Convert the report into a memo, brief, or recommendation note that a colleague could use in under five minutes.
  5. In one sentence, name the most important thing you changed during export -- what did the original report get wrong, overstate, or structure poorly for the actual audience?

Do not skip step five. Naming what you changed is how you build the editorial instinct that makes every future report review faster and sharper. Most people find a consistent pattern in what they have to fix -- learning yours is the most transferable skill from this exercise.

Key takeaway

A deep research report becomes valuable when you review it critically and adapt it into the artifact your real workflow needs. The report is not the end product. The decision memo, the briefing note, or the recommendation that you build from it is the end product. Treat the report as raw material, not as a finished deliverable.