About this Series
This five-part series explores the craft of building Excel models with AI, from foundational skills to advanced techniques for developing full financial models.
- Tutorial 0: Introduction
- Tutorial 1: Decomposition
- Tutorial 2: Specification
- Tutorial 3: Iteration
- Tutorial 4: Verification
- Tutorial 5: Context and Collaboration
You've decomposed the problem. You've written a careful specification. You hit enter and the LLM produces... something that's 70% right.
The structure is correct but the debt service formula references the wrong row. The waterfall logic is close but it's calculating preferred return on committed capital instead of contributed capital. The formatting is reasonable but not quite how you'd do it.
This is normal. This is expected. And if you're frustrated by it, you're thinking about the process wrong.
The skill isn't writing a prompt that produces a perfect model. The skill is knowing how to move from 70% to 100% efficiently. That's iteration — and it's where most people either waste hours or give up entirely.
The Iteration Mindset
Let's start with a reframe. When you build a model manually, you don't write every formula correctly the first time. You build, test, notice something's off, fix it. You might restructure a section entirely after seeing how it interacts with another section. This is normal and expected.
Working with an LLM is the same process, compressed and externalized. Instead of noticing errors as you type formulas, you notice them when reviewing output. Instead of fixing formulas directly, you describe what needs to change. The iteration loop still exists — it just looks different.
The people who struggle with LLMs often expect the first output to be final. When it isn't, they feel the tool has failed them. They either abandon it or start over with a new prompt, hoping this time it'll be perfect.
The people who succeed with LLMs expect iteration. They budget time for it. They treat the first output as a draft, not a deliverable. And they've developed techniques for iterating efficiently.
The 70/30 Rule
Here's a useful mental model: the LLM gets you 70% of the way there. Iteration gets you the remaining 30%.
That 70% represents enormous value. It's the structure, the basic formulas, the layout, the boilerplate. Work that might take you two hours happens in seconds. But it's not finished work. It's a draft.
The 30% is where your expertise matters. It's catching the formula that references the wrong cell. It's knowing that your firm calculates preferred return differently than the "standard" way. It's the formatting details that make a model professional rather than amateur.
Some people hear "70%" and think the LLM isn't good enough. But consider: if you can get 70% of a model for free and spend your time on the 30% that requires judgment, that's a significant improvement over building 100% manually.
The mistake is expecting 100% and being disappointed by 70%. The right expectation is 70%, with a clear process for the remaining 30%.
Coarse to Fine: The Iteration Sequence
Not all iteration is equal. There's an optimal sequence, and violating it creates unnecessary work.
Pass 1: Structure
First, verify that the architecture is right. Before you look at any formula, ask:
- Are all the sections present?
- Is the time structure correct (right number of periods, right granularity)?
- Does the layout match what you specified?
- Are the rows and columns organized logically?
If the structure is wrong, fix it before anything else. There's no point perfecting formulas in a section that needs to be reorganized.
At this stage, your feedback is architectural:
"The structure looks good, but I need a separate section for tenant improvements between the revenue section and the NOI calculation. Add rows for TI costs per SF, TI amortization period, and annual TI amortization expense."
Pass 2: Logic
Once the structure is right, examine the formulas. Not whether they're calculating correctly yet — whether they're calculating the right thing.
- Does the revenue formula account for vacancy?
- Is debt service based on the right loan balance?
- Does the exit value use the right cap rate and the right NOI?
At this stage, your feedback is about logic, not calculation:
"The debt service formula is using the initial loan amount for all periods. It should use the beginning-of-period balance, which decreases as principal is paid down."
Pass 3: Calculation
Now verify the math. Do the formulas reference the correct cells? Are the calculations producing sensible numbers?
This is where you test with real inputs:
"When I put in a $10M purchase price at a 5% cap rate, the Year 1 NOI should be $500,000. The model is showing $416,667. Check the NOI calculation."
Pass 4: Edge Cases and Polish
Finally, handle the details. What happens at year 1? At the final year? When an input is zero? This is also where you address formatting, labeling, and usability.
"The IRR formula returns an error if I set the hold period to 1 year. Add error handling so it returns 'N/A - Hold period too short' instead of #NUM!"
The reason this sequence matters: errors in early passes propagate to later passes. If you fix a calculation error but the underlying logic is wrong, you'll just have a different calculation error. If you perfect the formulas but then restructure the section, you'll need to redo the formulas.
Coarse to fine means you fix the big things first and progressively refine toward the details.
Communicating What's Wrong
The quality of your iteration feedback determines how quickly you converge on a working model. Vague feedback produces vague fixes. Specific feedback produces specific fixes.
Bad feedback:
"The waterfall isn't working right."
This tells the LLM almost nothing. Which part of the waterfall? What's it doing wrong? What should it do instead?
Better feedback:
"The waterfall is calculating preferred return incorrectly. It's currently calculating 8% on the total equity commitment ($5M). It should calculate 8% on the unreturned capital balance, which decreases as distributions are made."
This identifies the specific problem, explains what's happening now, and explains what should happen instead.
Best feedback:
"The preferred return calculation in row 45 is wrong. Currently the formula is =$C$10*0.08, which gives $400,000 every year (8% of the $5M commitment).
It should be: beginning unreturned capital balance × 8%. The unreturned capital balance starts at $5M and decreases by any return of capital distributions in prior periods. So if we distribute $1M of return of capital in Year 1, the Year 2 pref should be calculated on $4M, not $5M."
This gives the location, shows exactly what's wrong, and provides the logic for the correct calculation.
The pattern: Location + Current Behavior + Expected Behavior + Underlying Logic
The "Show, Don't Just Tell" Principle
Sometimes the fastest way to communicate what's wrong is to show it.
Using examples:
"The rent escalation isn't compounding correctly. Here's what I'm seeing:
- Year 1: $1,000 (correct, this is the input)
- Year 2: $1,030 (correct, 3% increase)
- Year 3: $1,060 (wrong — should be $1,061, which is $1,030 × 1.03)
The formula seems to be adding 3% of the base rent each year instead of compounding."
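If you want to verify the diagnosis before sending it, the two behaviors are easy to reproduce. A minimal Python sketch of simple versus compound escalation on the numbers above:

    # Simple vs. compound escalation on a $1,000 base rent at 3%.
    base, growth = 1_000, 0.03

    for year in range(1, 4):
        simple = base + base * growth * (year - 1)    # adds 3% of base each year
        compound = base * (1 + growth) ** (year - 1)  # grows prior year's rent by 3%
        print(f"Year {year}: simple {simple:,.2f}, compound {compound:,.2f}")

    # Year 3: simple gives 1,060.00; compound gives 1,060.90 (the $1,061 in the example).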
Using expected outputs:
"With these inputs — $10M purchase, 5% cap, 3% rent growth, 10-year hold, 6% exit cap — the IRR should be approximately 12-14%. The model is showing 8%. Something is wrong in either the cash flows or the exit value calculation."
Using formulas:
"Row 32 currently has: =B15*B$8
It should have: =B31*(1+$C$5)
This makes Year 2+ rent reference the prior year's rent grown by the growth rate, rather than the base rent times a fixed factor."
When you can pinpoint the exact cell and show the exact fix, iteration becomes trivially fast.
Isolating Problems
When multiple things are wrong, resist the urge to list them all at once. Fix one thing at a time.
Why? Because problems interact. Fixing one issue might resolve another. Or fixing one issue might change the context enough that your description of the second issue no longer makes sense.
The most efficient pattern:
- Identify the most fundamental problem (usually structural or logical, not calculation)
- Describe it precisely
- Get the fix
- Verify the fix worked
- Move to the next problem
This feels slower than listing everything at once. It's actually faster because each fix lands cleanly and you're never debugging a fix that was based on a stale understanding of the model.
The exception: if you have several small, independent issues (formatting fixes, label changes, minor formula corrections in unrelated sections), you can batch them. But if the issues are related or complex, one at a time is faster.
Recognizing Convergence vs. Divergence
Good iteration converges. Each round gets you closer to a working model. The errors get smaller, more localized, easier to fix.
Bad iteration diverges. Each fix creates new problems. You're going in circles. The model isn't getting better, just differently broken.
Signs of convergence:
- Each iteration has fewer issues than the last
- Issues are becoming more minor (formatting, edge cases vs. fundamental logic)
- Fixes are sticking — you're not re-fixing the same things
- The model is becoming more usable, not just differently structured
Signs of divergence:
- Fixing one thing breaks another
- The same issue keeps recurring with slight variations
- You're on iteration 7 and still dealing with structural problems
- The LLM seems to be "forgetting" previous fixes
When you're diverging, continuing to iterate is often the wrong move. You need to step back and understand why.
When to Reset
Sometimes the fastest path forward is starting over. This feels like giving up, but it's often the right call.
Reset signals:
- The structure is fundamentally wrong and fixing it would require touching most of the model
- You've done 5+ iterations without meaningful progress
- The model has become a patchwork of fixes that's harder to understand than a fresh build would be
- You've learned something from the failed attempt that would make your next specification much better
The reset decision framework:
Estimate two things:
- How many more iterations to fix this version?
- How long to rebuild with a better specification?
If rebuilding is faster — and it often is after 5+ failed iterations — rebuild.
Salvaging value from failed attempts:
A reset doesn't mean the work was wasted. Before you start over:
- Identify what the LLM got right — you can reference this in your new prompt
- Identify where the specification was inadequate — fix this in the new prompt
- Save any complex formulas that worked — you can ask the LLM to reuse them
"I'm going to start over on this model. The previous version had the right structure for the revenue section — keep that approach. But the debt section needs to be fundamentally different: instead of sizing the loan on purchase price, size it on a DSCR constraint with a 1.25x minimum."
Efficient Iteration Patterns
With experience, you'll develop patterns that make iteration faster. Here are some that work well:
The "Fix and Extend" Pattern
Get one section working completely before moving to the next. Don't iterate on the whole model at once.
"Let's focus on the revenue section first. [Iteration until revenue is correct.] Good, revenue looks right. Now let's move to operating expenses."
This contains errors to one section at a time and gives you a stable foundation to build on.
The "Reference Solution" Pattern
When the LLM isn't understanding what you want, show it an example.
"Here's how the preferred return should be calculated. In Year 1:
- Beginning capital balance: $5,000,000
- Preferred return (8%): $400,000
- Cash distributed: $300,000
- Applied to pref: $300,000
- Unpaid pref carryforward: $100,000
- Return of capital: $0
- Ending capital balance: $5,000,000
In Year 2:
- Beginning capital balance: $5,000,000
- Accrued unpaid pref: $100,000
- Current year pref: $400,000
- Total pref owed: $500,000
- Cash distributed: $600,000
- Applied to pref: $500,000
- Return of capital: $100,000
- Ending capital balance: $4,900,000
Structure the waterfall to calculate like this example."
A worked example often communicates more than a paragraph of description.
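The same example translates directly into code, which can be a useful way to double-check your own worked numbers before you send them. Here is a minimal Python sketch that reproduces the Year 1 and Year 2 figures above (the single-LP structure and variable names are simplifications):

    # Preferred return on unreturned capital, with unpaid pref carried forward.
    capital = 5_000_000
    pref_rate = 0.08
    carryforward = 0

    for year, cash in enumerate([300_000, 600_000], start=1):
        pref_owed = carryforward + capital * pref_rate   # accrued + current year pref
        applied_to_pref = min(cash, pref_owed)
        carryforward = pref_owed - applied_to_pref
        return_of_capital = min(cash - applied_to_pref, capital)
        capital -= return_of_capital
        print(f"Year {year}: pref owed {pref_owed:,.0f}, paid {applied_to_pref:,.0f}, "
              f"ROC {return_of_capital:,.0f}, ending capital {capital:,.0f}")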
The "Explain Then Fix" Pattern
When you're not sure what's wrong, ask the LLM to explain before asking it to fix.
"The IRR is showing 8% but I expected something around 12-14%. Before fixing anything, walk me through how the IRR is being calculated. What cash flows is it using? What dates?"
The explanation often reveals the problem more clearly than inspection would.
The "Parallel Path" Pattern
When you're not sure which approach is right, try both.
"I'm not sure whether to model the TI as a one-time cost at lease signing or amortize it over the lease term. Build it both ways in adjacent columns so I can see the impact and decide which approach to use."
This is particularly useful early in a model when structural decisions have big downstream effects.
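For instance, the two TI treatments in that prompt differ by a single line of arithmetic. A minimal sketch with hypothetical lease numbers:

    # Two treatments of a $25/SF TI package on 10,000 SF (hypothetical numbers).
    ti_cost = 25 * 10_000          # $250,000 total TI

    # Option A: one-time cost at lease signing (hits Year 1 cash flow).
    year1_hit = ti_cost

    # Option B: straight-line amortization over a 10-year lease term.
    lease_term = 10
    annual_amortization = ti_cost / lease_term   # $25,000 per year

    print(f"One-time: {year1_hit:,.0f} in Year 1; "
          f"amortized: {annual_amortization:,.0f}/yr")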
Knowing When You're Done
Iteration can continue indefinitely. There's always another edge case to handle, another formatting improvement to make. You need to know when to stop.
"Good enough" criteria:
- All core calculations are correct and verified
- The model handles realistic inputs without errors
- Someone else could understand the structure and logic
- Edge cases that matter for this deal are handled
- Formatting is professional (not perfect, professional)
Perfectionism traps to avoid:
- Handling edge cases that will never occur in this deal
- Formatting details that don't affect usability
- Making the model "general purpose" when you need it for one specific deal
- Achieving elegance in formulas that already work correctly
A model that's 95% polished today is more valuable than a model that's 100% polished next week. Know when to stop iterating and start using.
The Feedback Loop to Specification
Every iteration teaches you something about specification.
When you find yourself explaining the same type of issue repeatedly — "no, preferred return should be calculated on unreturned capital, not committed capital" — that's a signal. Next time, include that detail in your initial specification.
Keep a running list of specification gaps that cause iteration. Over time, your specifications will become more complete, and your iteration cycles will become shorter.
The goal isn't to eliminate iteration. The goal is to make iteration about refinement rather than correction — about polish rather than fixing fundamental misunderstandings.
Practical Exercise
Take a model you've built recently — or one you need to build — and try this:
- Write a specification following the framework from Tutorial 2
- Generate a first draft
- Apply the coarse-to-fine sequence: structure → logic → calculation → polish
- For each issue, practice writing precise feedback: location, current behavior, expected behavior, logic
- Track how many iterations each section takes
- After you're done, note which specification gaps caused the most iteration
This deliberate practice builds the intuition for efficient iteration. After a few models, you'll find yourself writing better specifications and iterating faster.
Next in the Series
You've decomposed the model, specified it clearly, and iterated to a working version. But how do you know it's actually correct?
Part 4: Verification covers the systematic process of validating LLM-generated models — the structural checks, formula traces, numerical sanity tests, and edge case verification that give you confidence the model does what it should.
Iteration gets you to a model that looks right. Verification confirms that it is right.
Learn More
This is a five-part series on the craft of building Excel models with AI.
- Tutorial 0: Introduction
- Tutorial 1: Decomposition
- Tutorial 2: Specification
- Tutorial 3: Iteration
- Tutorial 4: Verification
- Tutorial 5: Context and Collaboration
About Apers AI
Apers AI was founded by researchers from Harvard, Yale, and MIT with backgrounds in quantitative asset pricing and institutional real estate. Our mission is to advance the science of capital deployment by applying autonomous agents and machine reasoning to real asset markets.