How I Triage a New Codebase in 90 Minutes
A fractional engineering engagement starts with a codebase you've never seen. You have ninety minutes to form a useful POV before the kickoff call. The seven-step triage I run, the two questions I bring back to the founder, and how AI tooling has accelerated the process.
This is a recurring situation for me as a fractional engineer. A founder books a discovery call, gives me read-only access to their repo on Tuesday, and the kickoff call is Wednesday morning. Between those two events, I need to know enough about the system to ask intelligent questions, identify the load-bearing risks, and not waste the founder's time with surface-level observations they could have written themselves.
The framing that makes this work: the goal is not to "understand the codebase." That takes weeks. The goal is to find the load-bearing risks. Here is the seven-step triage I run, in order.
Step 1: README.md and any architecture docs (10 minutes)
Start at the front door. The README tells me three things almost immediately: how the team thinks, how recent the project's intentional documentation is, and whether the founders or engineers wrote it.
Signals to look for: when was this last meaningfully updated? Does the "how to run locally" section reference real commands or stale ones? Are there architecture diagrams, ADRs, or design docs anywhere in the repo? An empty docs/ directory and a README that ends with "TODO: write more about this" tells you a great deal about engineering culture before you've read a single line of code.
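The freshness question has a quick answer from the command line. A minimal check, assuming the docs live in docs/:

```sh
# Last meaningful touch to the README and the docs tree: date, author, subject.
git log -1 --format="%cd  %an  %s" -- README.md docs/
```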
I copy the relevant facts into my own running notes. The first paragraph of those notes is "what the team thinks the system does."
Step 2: Dependency audit (5 minutes)
Open package.json, mix.exs, requirements.txt, go.mod, or whatever the equivalent is for the language. Two questions: what major frameworks are in use, and how out of date is everything?
The dependency list is a faster summary of the system's architecture than reading the architecture docs. If I see phoenix_live_view + oban + ecto, I know the shape of the app. If I see thirty random utility libraries and no obvious framework, I know there's been turnover or a lack of opinionated leadership.
For the freshness check: a dependency that's two major versions behind isn't automatically bad, but a codebase where every major dependency is two-plus versions behind is a maintenance bomb in the making. Make a note.
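For a Node project the freshness check is one command; other stacks have equivalents. A sketch, not a full audit:

```sh
npm outdated            # Node
# mix hex.outdated      # Elixir
# pip list --outdated   # Python
# go list -u -m all     # Go
```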
Step 3: The git log story (10 minutes)
Run git log --oneline --all -200. Skim the last two hundred commit messages.
What you're looking for: who's committing, what they're committing, and what the rhythm looks like. A repo where one engineer wrote 90% of the recent commits is a key-person-risk situation. A repo where the commits are mostly "wip" and "fix typo" tells you about the team's commit hygiene. A repo where every PR has a clear, conventional-commit-style message tells you the team has invested in process.
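The authorship question has a one-liner answer:

```sh
# Commit counts per author over the window that matters, most active first.
git shortlog -sn --since="6 months ago"
```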
The signal that matters most for triage is pattern frequency. Lots of "revert" commits in the recent past means unstable changes are shipping. Lots of "fix" commits referencing one specific module means that module is troubled. Use the patterns to direct what you read next.
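The counting is quick to script. The module path below is a hypothetical stand-in for whichever module the log keeps pointing at:

```sh
# How often do reverts show up in the last 200 commits?
git log --oneline -200 | grep -ci revert
# How many "fix" commits touched one suspect module?
git log --oneline --since="6 months ago" --grep=fix -- lib/messaging | wc -l
```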
Step 4: Test coverage and CI signal (10 minutes)
Find the test directory. Count files. Find the CI configuration. Read it. Run the test suite if you can — does it pass? How long does it take?
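The raw counts take seconds to pull. A sketch, assuming a conventional test/ directory and one of the common CI layouts:

```sh
find test -type f | wc -l                                  # raw test volume
ls .github/workflows .circleci .gitlab-ci.yml 2>/dev/null  # CI config, if any
```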
You're looking for three signals:
- Volume. Is there one test file or two hundred?
- Quality. Open three random test files. Do they test behavior or just exercise code?
- CI status. Is the build green? Has it been red for more than 24 hours? Is there a culture of merging on red?
The test suite is the closest thing a codebase has to a self-portrait. A team that's invested in test quality is a team that takes engineering seriously. A team where the tests don't pass on a fresh checkout is a team that has bigger problems than the ones the founder is going to tell you about.
Step 5: The hottest files (10 minutes)
Run a quick analysis to find the most-changed files in the last six months: git log --pretty=format: --name-only --since="6 months ago" | grep -v '^$' | sort | uniq -c | sort -rg | head -20 (the grep drops the blank lines the empty format string emits). The top of that list is where the action is.
Open the top three or four files. Read them. These are the load-bearing parts of the system, by definition — they're where the team is spending their engineering energy.
What you're looking for: are these files long, complicated, and full of inline comments saying "TODO: this is a hack"? Or are they crisp, well-factored, and recently refactored? The hot files tell you where the system is fragile and where it's healthy. They're also the files that an incoming engineer (or fractional engagement) will most likely need to touch first.
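Two cheap numbers give a first read on each hot file's health. The path below is a hypothetical stand-in for whatever the frequency list surfaced:

```sh
wc -l lib/messaging/dispatcher.ex                        # hypothetical hot file
grep -c "TODO\|HACK\|FIXME" lib/messaging/dispatcher.ex  # inline-hack density
```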
Step 6: The auth and data layer (15 minutes)
Now the deep dive. Find the authentication code. Find the database schema. Read both carefully.
Authentication: how does a user log in? Where are sessions stored? Is there MFA? Are passwords hashed with a current algorithm? Is there an obvious authorization layer beyond authentication? This is where I find the highest-severity bugs in early-stage codebases — usually authorization issues that the team hasn't yet noticed because they haven't been exploited.
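A blunt first pass at finding the auth surface, assuming an Elixir/Phoenix layout like the step-2 example (patterns and paths are guesses; tune them per stack):

```sh
# Where do authentication and session handling live?
grep -rni --include="*.ex" "authenticate\|session\|password" lib/ | head -20
```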
Data layer: what tables exist, what relationships do they have, are there migrations that suggest the schema has been refactored, are the indexes sensible? The schema is the contract the system runs under. Anything wrong with it is wrong with everything else.
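For the schema's history, the migrations directory is the quickest read (again assuming a Phoenix layout; Rails keeps these in db/migrate, Django in per-app migrations/ directories):

```sh
# Timestamped migration filenames sort chronologically, so the tail shows
# the most recent schema churn.
ls priv/repo/migrations | tail -20
```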
This is the longest step in the triage by design. If I'm going to find a deal-breaker risk, it's almost always in this section.
Step 7: The most recent incident (10 minutes)
Ask the founder (or look in the team Slack history if you have access): when was the last production incident? What broke? How was it fixed?
The incident report — written or verbal — tells you more about the engineering culture than any single artifact in the repo. A team that has a clear retro doc with five action items, three of which were completed, is a team that learns. A team where "the last incident" is met with "uh, well, last week the database fell over for an hour, I think someone restarted it" is a team that doesn't.
This step also surfaces the thing the founder is most worried about. They'll volunteer it once they sense you're asking real questions.
What you write down at the end
Ninety minutes of triage produces a one-page document with the following sections:
- What the team thinks the system does (one paragraph, from step 1)
- The shape of the stack (three lines, from step 2)
- Engineering rhythm (commit frequency, team size, hot spots — from steps 3 and 5)
- Quality signals (test coverage, CI, code-review hygiene — from step 4)
- The two highest-severity risks I found (from steps 6 and 7)
- The two questions I'm bringing back to the founder
The two questions are the most important output. They're how you signal to the founder that you've done real work in 90 minutes and have a useful POV. They're also the questions whose answers will reshape everything you do in the engagement.
Examples I've actually used: "Is the lack of audit logging on the admin endpoints intentional?" Or: "Looking at the last six months, your messaging service has been the source of three of the four production incidents — what do you and the team think is going on there?"
The AI-assisted version
The 90-minute number above predates Claude Code. The same triage now takes about half that time with AI assistance.
The pattern: I point a Claude Code session at the repo and hand it the mechanical work. Steps 1 through 5 are largely automatable; I tell Claude what to look for and it produces structured summaries faster than I can read the raw files. I spend the saved time on steps 6 and 7, which still benefit from human attention.
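A minimal sketch of what that looks like in headless mode, assuming the claude CLI is installed and run from the repo root; the prompt wording is mine, not a canonical recipe:

```sh
claude -p "Triage this repo: summarize the README, list major dependencies \
and how outdated they are, profile the last 200 commits (authors, patterns), \
assess test volume and CI config, and list the 20 most-changed files of the \
last 6 months with a one-line health note on each."
```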
Net result: the same depth of triage in 45 minutes instead of 90, with substantially better notes because the AI captures things I would have skimmed. The disciplined human still drives. The agent accelerates.
The takeaway
The skill being practiced here is not "reading code fast." It's knowing what to look at first. Most engineers, on entering an unfamiliar codebase, dive into the part of the system most relevant to their immediate task and form a partial picture. The triage protocol forces you to look at the system in the order that surfaces risks, not in the order that matches your task.
Useful for fractional engineers. Useful for new hires on their first day. Useful for anyone who's about to take on technical responsibility for a codebase they didn't write. Run the protocol. Take the notes. The ninety minutes will save you weeks downstream.
What you say on the kickoff call
One last note. The triage produces a written one-pager, but the founder is going to want to talk through it on the kickoff call. The framing that lands: lead with what you saw that's working, then name the two highest-severity risks, then ask the two questions.
This sequence is deliberate. Founders are sensitive about their codebase — many haven't had an outsider look at it in years. Leading with criticism puts them on defense and shrinks the conversation. Leading with what's working buys you the credibility to then talk about what isn't.
The two questions you ask are the most important moment of the call. They signal you've done real work, not just listed problems, and they invite the founder into a conversation about priorities rather than a lecture about deficiencies. Done right, the kickoff call ends with the founder saying some version of "let's start with those two things you flagged" — which is exactly the engagement you wanted to land.