Katie Academy

GitHub with ChatGPT

Advanced · 18 minutes · Lesson 2 of 5


Learning objectives

  • Understand what GitHub connections enable
  • Know when repository-aware workflows help
  • Keep human review in the loop

GitHub-connected ChatGPT workflows become useful when repository context matters.

That includes understanding code structure, reviewing changes, generating pull-request-oriented help, or connecting cloud-based coding work to a repository. The key insight is that pasting code snippets into a chat window is a fundamentally different experience from giving the model access to the structure, history, and relationships across a codebase.

[Diagram: a repository feeding into three activities -- understand, review, and propose changes.]

What you'll learn
  • What GitHub connectivity changes in practice
  • How to set up the GitHub connection
  • Where repository-aware help is useful across different ChatGPT modes
  • Why review and testing still matter

Setting up the connection

To connect GitHub: go to Settings > Apps > GitHub, authorize on GitHub, and select which repositories ChatGPT should have access to. You can adjust repository access later without reconnecting.

A practical tip: start by connecting only the repositories you actively need help with. Granting access to all repositories at once is rarely necessary and increases the surface area of what the model can see. You can always add more repositories later as specific needs arise.

Why this matters

Code help gets much more valuable when the assistant can orient itself inside a real repository rather than guessing from pasted fragments.

The GitHub connection now works across multiple ChatGPT surfaces -- not just chat. It integrates with apps and sync, file search, Deep Research, and agent mode. That means repository context can flow into research tasks, multi-step agent workflows, and connected-tool operations, not just single-turn code questions.

The reason this matters so much is that most coding questions are context-dependent. A function that looks correct in isolation might violate conventions established elsewhere in the codebase. A proposed refactor that seems clean might break assumptions in a module the model has never seen. When ChatGPT can see the full repository, it can account for those dependencies. When it cannot, it fills the gaps with generic assumptions, and generic assumptions in code are often wrong in ways that are expensive to debug.

At the same time, connected code workflows increase the need for discipline. Repository context helps, but it does not replace review, testing, or judgment. The model can see the code, but it does not know the full history of design decisions, team conventions, or business constraints that shaped it.

Deep Research with GitHub

Since May 2025, the Deep Research GitHub connector lets ChatGPT work with your actual codebase during research tasks. It can break down product specs into technical tasks, summarize code structure across a repository, and understand API implementations using real code rather than abstractions. This is especially valuable for onboarding, architecture review, and technical planning.

The practical implication is significant: a new team member can use Deep Research to build an understanding of how a codebase is structured, what patterns it follows, and where the main entry points are -- in hours rather than days. The output is not a substitute for reading the code, but it provides an orientation that makes the first real code reading far more productive.

The core idea

GitHub connectivity is about context and workflow, not blind automation.

It helps ChatGPT reason against real repository structure and collaborate more effectively on code-related tasks. The more consequential the change, the more important it is to keep review checkpoints explicit.

The value of repository awareness follows a clear pattern. For tasks where context changes the answer -- understanding how a module fits into the larger system, reviewing whether a PR follows project conventions, or generating documentation that reflects actual implementation -- the connection is genuinely valuable. For tasks where the question is self-contained -- explaining a language concept, debugging a small algorithm, or writing a utility function from a clear specification -- the connection may add nothing useful.

Skilled users learn to recognize which category a task falls into before deciding whether to use the connection. That judgment is more important than the connection itself.

There is also a scope-quality tradeoff that matters here. A repository connection gives the model access to a large amount of code, but that does not mean it processes all of it with equal depth. When you ask about a specific module, the model can focus its attention effectively. When you ask about the whole repository, attention gets spread thin and the analysis becomes shallower. This is not a limitation unique to AI -- human reviewers face the same tradeoff. The difference is that the model will still produce confident-sounding output even when its analysis is shallow, which makes scope discipline even more important.

Finally, repository-connected workflows work best when they complement your existing development process rather than replace parts of it. The connection is a lens that helps you see the code from a different angle -- catching inconsistencies, surfacing patterns, or generating documentation that reflects actual implementation. It is not a substitute for the engineering judgment, domain knowledge, and team context that you bring to the codebase.

Use GitHub-connected workflows when repository context changes the answer. Avoid pretending that the connection removes the need for code review.

How it works

  1. Identify the repo-aware task. Explanation, review, change proposal, or issue-oriented work are common examples. The key question is whether repository context would change the answer. If it would not, the connection adds nothing.
  2. Scope the request tightly. Specify the module, file, or PR you want analyzed. A request scoped to src/auth/ produces better results than one aimed at the entire repository.
  3. Use the connection where it adds real context. Repository structure, diffs, related files, and cross-module dependencies are the kinds of context that make the connection valuable.
  4. Keep verification in the loop. Review the output, test changes, and treat the connection as leverage rather than authority. The model can see the code, but it cannot see the intent behind the code.

What skilled users do differently

A less experienced user connects GitHub and immediately asks ChatGPT to "fix the bug" or "improve the code," trusting that repository access will make the suggestions correct. They treat the connection as a shortcut past review.

A skilled user treats the GitHub connection as context, not authority. They use it to orient the model inside the codebase so that suggestions are more informed, but they still review diffs carefully, run tests, and cross-check against project conventions. They also scope their requests tightly. Instead of "review this repository," they ask about a specific module, a particular PR, or a focused question about how two components interact. Tight scope plus rich context produces the best results. Broad scope plus rich context produces plausible-sounding suggestions that may miss important nuances.

Skilled users also develop a habit of specifying what kind of answer they want from the connected workflow. "Explain how this module works" is different from "identify inconsistencies between these two modules" which is different from "generate documentation that matches the actual implementation." Each request type uses the repository context differently, and being explicit about the type produces sharper results.

There is also a temporal awareness that skilled users bring. Repository context is a snapshot. If the codebase has changed significantly since the last sync, the model may be working with outdated information. Skilled users check when the repository was last synced and account for recent changes that might not be reflected in the model's view.
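One practical freshness check is to compare the newest commit date in your checkout against the most recent change the model describes. Below is a minimal sketch, assuming git is installed on your machine; the throwaway repository it creates stands in for your real checkout:

```python
import subprocess
import tempfile

def last_commit_iso(repo: str) -> str:
    """Return the ISO-8601 timestamp of the most recent commit in `repo`."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%cI"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Demo against a throwaway repo (a stand-in for your real checkout):
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
subprocess.run(
    ["git", "-c", "user.email=demo@example.com", "-c", "user.name=demo",
     "commit", "-q", "--allow-empty", "-m", "initial"],
    cwd=repo, check=True,
)
print(last_commit_iso(repo))  # ISO-8601 timestamp of the newest commit
```

Against a real repository you would run `git fetch` first, then compare this timestamp with the latest change the model mentions; a mismatch is a signal that its snapshot is stale.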

Two worked examples

Example 1: a weak repository-connected workflow

I connected my repo. Please review the whole codebase and suggest improvements.

This fails for a predictable reason. "The whole codebase" is too broad for useful review. The model will produce generic suggestions -- add more tests, improve error handling, consider refactoring long functions -- that could apply to any repository. The connection adds no value because the request does not leverage the specific context it provides.

Example 2: a strong repository-connected workflow

Review the authentication module in src/auth/ against the patterns used in src/middleware/.

Focus on:
- Whether the error handling in auth follows the same conventions as middleware
- Whether there are any inconsistencies in how tokens are validated
- Whether the auth module's exports match what the middleware expects

List specific files and line references for any issues you find.

This works because the request is scoped to specific modules, asks for a specific kind of analysis (consistency between two parts of the codebase), and requests concrete references. The repository connection is essential here -- without it, the model could not compare patterns across modules.
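To make the target of that review concrete, here is a deliberately simplified, hypothetical sketch. The module names, functions, and token convention are invented for this lesson, not drawn from any real repository:

```python
# Hypothetical convention drift between two modules. Everything here is
# invented for illustration -- not a real auth or middleware implementation.

def middleware_validate(token: str) -> bool:
    # middleware convention: tokens must carry a "tok_" prefix
    return token.startswith("tok_") and len(token) > 4

def auth_validate(token: str) -> bool:
    # the auth module drifted: it skips the prefix check, so the two
    # modules disagree on malformed tokens -- exactly the inconsistency
    # a scoped, cross-module review prompt is designed to surface
    return len(token) > 4

print(middleware_validate("abcdef"), auth_validate("abcdef"))  # False True
```

A whole-repository review would likely bury this kind of drift in generic suggestions; a prompt scoped to the two modules and the specific question of token validation is far more likely to catch it.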

Prompt block

Help me with this GitHub repo.

Better prompt block

Help me with a repository-aware task.

I want support with:
[describe the task]

Please:
- explain what repository context would improve here
- tell me what should still be reviewed manually
- separate understanding, proposed changes, and validation steps

Why this works

The better prompt keeps the workflow grounded. It distinguishes between what the connection enables and what still needs human review. This separation is important because the most common failure in connected workflows is not bad suggestions -- it is the erosion of review discipline. When the model seems to "know" the codebase, users tend to trust its output more than they should. The better prompt counters that tendency by making the review requirement explicit from the start.

It also structures the response into three distinct phases -- understanding, proposed changes, and validation -- which maps directly to how experienced developers actually work. They orient themselves in the code first, then propose changes, then verify those changes are correct. By asking the model to follow that same sequence, you get output that is easier to review and more likely to be correct at each stage. This three-phase pattern is reusable across any repository-connected task.

Common mistakes
  • Assuming repository access makes suggestions automatically correct
  • Skipping review because the workflow feels integrated
  • Using a connected repo when the task could be handled more simply with local context
  • Asking for whole-codebase analysis instead of scoping to specific modules or concerns
  • Forgetting that repository access gives the model code structure but not the full history of design decisions or team conventions
  • Granting access to all repositories when only one or two are needed for the current task
  • Forgetting to check when the repository was last synced, which can lead to analysis based on outdated code
Mini lab
  1. Pick one code-related workflow you already do: onboarding, review, debugging, or documentation.
  2. Write down what GitHub context would improve in that workflow.
  3. Write down what you would still insist on checking yourself, even with the connection active.
  4. Draft a scoped prompt that uses the connection for a specific, bounded question about one part of the codebase.
  5. In one sentence, name the boundary between what the connection should handle and what remains your responsibility.

Reflect on whether that boundary would shift for a high-stakes change versus a low-risk one. The boundary is not fixed -- it moves with consequence.

Do not skip step five. The boundary between delegation and review is the most important judgment call in any connected coding workflow.

Key takeaway

GitHub-connected workflows are most useful when repository context genuinely changes the answer and review remains explicit. The connection is a tool for informed assistance, not a substitute for engineering judgment. The best results come from tight scope, clear questions, and review discipline that stays firm even when the model seems to understand the codebase deeply.