About this Series
This six-part series explores the craft of building Excel models with AI, from foundational skills to advanced techniques for developing full financial models.
- Tutorial 0: Introduction
- Tutorial 1: Decomposition
- Tutorial 2: Specification
- Tutorial 3: Iteration
- Tutorial 4: Verification
- Tutorial 5: Context and Collaboration
You've learned to break down complex models into manageable pieces. You know how to specify what you want with precision. You can iterate efficiently and verify that what you got is correct. But there's a layer underneath all of these skills that determines whether the whole process feels like a struggle or a collaboration.
That layer is how you manage the conversation itself.
Working with an LLM to build Excel models isn't like using a tool. It's not like Googling for a formula or following a template. It's a collaboration — and like any collaboration, it works better when you understand your partner's capabilities, limitations, and how to communicate effectively.
This tutorial is about the meta-skill: managing context, choosing the right mode of collaboration, and developing your practice over time.
The LLM Only Knows What You Tell It
This sounds obvious, but its implications are profound.
When you sit down to build a model, your head is full of context. You know the deal, the property, the investor's preferences, your firm's conventions, the way you like to structure your spreadsheets. You know that when you say "cash flow," you mean unlevered cash flow before debt service. You know that your models always put assumptions on a separate tab. You know that the acquisition is expected to close in March, so you'll need a stub period.
The LLM knows none of this.
Every conversation starts from zero. The LLM doesn't remember the model you built last week. It doesn't know your formatting preferences. It doesn't know that your firm uses a specific waterfall structure or that your boss hates nested IF statements.
This isn't a flaw — it's a feature. It means every conversation is a fresh start, uncontaminated by previous misunderstandings. But it also means you have to be intentional about what context you provide.
The practical question is: what does the LLM need to know to help you effectively?
Too little context, and you'll get generic outputs that miss your specific requirements. Too much context, and you'll overwhelm the conversation with details that don't matter, making it harder for the LLM to focus on what does.
The sweet spot is providing context that's relevant and specific. Not everything about the deal — just what affects the model. Not your entire formatting philosophy — just the conventions that matter for this task.
For example, if you're building an acquisition model, relevant context might include:
- Property type and basic deal terms
- Hold period and exit assumptions
- Whether this is a quick screening model or a detailed underwriting
- Any unusual structural elements (seller financing, assumable debt, earnouts)
- Key outputs you care about (IRR, equity multiple, cash-on-cash)
What's probably not relevant:
- The history of how you found the deal
- Details about the physical property that don't affect cash flows
- Your opinions about the market
Get in the habit of asking yourself: Does the LLM need to know this to build what I'm asking for? If the answer is no, leave it out.
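To make the key output metrics above concrete, here is a minimal Python sketch (the numbers are hypothetical, and the bisection IRR is one simple approach among several) showing how IRR, equity multiple, and cash-on-cash are computed from a cash flow series:

```python
def irr(cash_flows, lo=-0.99, hi=10.0):
    """Annual IRR via bisection: the rate at which the NPV of the
    cash flows is zero. cash_flows[0] is the initial equity outflow
    (negative); assumes a single sign change in the series."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # rate too low, NPV still positive
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical 5-year hold: $1.0M equity in, annual cash flow, sale in year 5
flows = [-1_000_000, 60_000, 65_000, 70_000, 75_000, 1_400_000]

equity = -flows[0]
irr_val = irr(flows)
equity_multiple = sum(flows[1:]) / equity   # total distributions / equity in
cash_on_cash_y1 = flows[1] / equity         # year-1 cash flow / equity in
```

Sharing even a toy example like this with the LLM pins down exactly which conventions you mean by each metric.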
Describing What You Already Have
Sometimes you're not starting from scratch. You have an existing model and you want to modify it, extend it, or fix something that's broken.
This creates a communication challenge: how do you describe a complex spreadsheet to an LLM that can't see it?
You have several options, each with trade-offs.
Prose description works well for high-level structure. "I have a monthly cash flow model with revenue on rows 10-25, expenses on rows 27-45, and NOI calculated on row 47. Time runs horizontally from column C to column BN, representing a 5-year hold." This gives the LLM a mental map without getting lost in details.
Cell references are useful when you're asking about specific formulas or relationships. "The formula in D47 is =D25-D45, but I need it to also subtract the management fee in row 48." This precision helps when you're debugging or making targeted changes.
Actual data or formula snippets can be powerful when the LLM needs to understand the pattern. "Here's what my debt service calculation looks like: =PMT($B$12/12,$B$13*12,-$B$10). I need to modify this for an interest-only period."
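The interest-only modification mentioned above can be sketched in Python to show the LLM (or yourself) the intended pattern. This mirrors Excel's PMT sign convention; the loan terms are hypothetical:

```python
def pmt(rate, nper, pv):
    """Mirror of Excel's PMT: the level payment that amortizes pv
    over nper periods at the given periodic rate."""
    if rate == 0:
        return -pv / nper
    return -pv * rate / (1 - (1 + rate) ** -nper)

def monthly_debt_service(loan, annual_rate, term_years, io_years):
    """Payment schedule with an interest-only period up front, then
    amortization over the remaining term."""
    r = annual_rate / 12
    io_months = io_years * 12
    amort_months = (term_years - io_years) * 12
    io_payment = loan * r                        # interest only
    amort_payment = pmt(r, amort_months, -loan)  # amortize remaining balance
    return [io_payment] * io_months + [amort_payment] * amort_months

# Hypothetical: $10M loan, 6% rate, 10-year term, 2 years interest-only
schedule = monthly_debt_service(10_000_000, 0.06, 10, 2)
```

Note the design choice: the amortizing payment is sized over the post-IO months, which is the usual convention, but you'd want to state that explicitly when prompting, since amortizing over the full original term is also a defensible reading.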
Screenshots are tricky. If you can share them, they can be helpful for layout questions. But the LLM can't read the formulas in a screenshot, only the visible values — so they're more useful for "does this look right?" than "why isn't this working?"
The general principle: match the description method to what you're trying to accomplish. Structural questions need structural descriptions. Formula questions need formula details. Layout questions might benefit from visual reference.
One more consideration: sometimes it's faster to just share the data than to describe it. If you're asking the LLM to help with a rent roll analysis, pasting the actual rent roll (or a representative sample) is usually more efficient than describing its structure.
Choosing How to Collaborate
Not every interaction with the LLM should work the same way. Different tasks call for different collaboration modes, and consciously choosing the right mode makes the work go more smoothly.
Generative mode is when you ask the LLM to create something from scratch. "Build me a 10-year DCF for a multifamily acquisition." You specify what you want, the LLM generates it, and you review and refine. This mode is best when you have a clear picture of the end state and you want to get there quickly.
The risk in generative mode is passivity. If you just accept what the LLM produces without engaging critically, you'll miss errors and end up with models that don't quite fit your needs. Generative mode works best when paired with active verification.
Advisory mode flips the dynamic. You're building the model, and you turn to the LLM for help with specific questions. "What's the best way to structure a promote waterfall in Excel?" or "How should I handle partial-year depreciation?" You maintain control of the model while using the LLM as a consultant.
This mode is valuable when you know what you're doing overall but hit specific technical challenges. It's also useful when you want to learn — by doing the work yourself and asking for guidance, you build skills that pure generative mode doesn't develop.
Pair building mode is a back-and-forth conversation where you and the LLM construct the model together. "Let's start with the sources and uses. I'll tell you the capital stack, and you help me structure the table." Then, "Okay, now let's build the revenue section. Here's how I'm thinking about it..." This mode is slower but produces models that are more likely to fit your exact needs because you're involved at each step.
Pair building is especially valuable for complex or unusual structures where you don't trust a purely generative approach to get it right. The ongoing dialogue catches misunderstandings early.
Debugging mode is focused on fixing something that's broken. "My IRR is showing 147%, which can't be right. Here's the cash flow it's referencing." The LLM becomes a diagnostic partner, helping you trace through logic to find the error.
In debugging mode, the more specific you can be about what's wrong, the better. "It's broken" gives the LLM nothing to work with. "The debt service in year 3 is negative, but it should be $1.2M based on the loan terms I input" provides a starting point for investigation.
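An absurd IRR like the 147% above almost always traces back to a handful of usual suspects. A quick pre-flight check, sketched here in Python (the thresholds are arbitrary illustrations, not standards), shows the kind of diagnostic a debugging conversation tends to converge on:

```python
def sanity_check_flows(flows):
    """Quick diagnostics to run before trusting an IRR: sign errors,
    multiple sign changes, and unit/scaling mistakes are the usual
    culprits behind an implausible result."""
    issues = []
    if flows[0] >= 0:
        issues.append("first cash flow is not a negative outflow")
    sign_changes = sum(1 for a, b in zip(flows, flows[1:]) if a * b < 0)
    if sign_changes != 1:
        issues.append(f"{sign_changes} sign changes (multiple IRRs possible)")
    if max(flows[1:], default=0) > 10 * abs(flows[0]):
        issues.append("a distribution dwarfs the initial investment; "
                      "check units or scaling")
    return issues
```

Pasting the actual cash flow row alongside output like this gives the LLM a concrete starting point instead of a symptom.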
The key insight is that you can and should switch modes as the task evolves. You might start in generative mode to create a first draft, switch to advisory mode when you encounter a tricky calculation, shift to pair building mode for a complex section, and end in debugging mode when something doesn't tie out.
Being conscious of which mode you're in — and whether it's the right mode for the current task — makes the whole process more efficient.
Prompt Patterns That Work
After enough experience building models with LLMs, you start to notice patterns — certain prompt structures that reliably produce good results.
The skeleton-first pattern gets the structure right before filling in details. "Give me the row structure for a development pro forma. Don't worry about formulas yet — just the line items and sections." Once you've agreed on the skeleton, you flesh out each section. This pattern works because structural mistakes are expensive to fix later. Better to catch them early.
The example-based pattern shows the LLM what good looks like. "Here's how I structured a similar model last time. Follow this approach for the new one." Or, "This formula works for the acquisition scenario: =NPV(discount_rate, cash_flows) + terminal_value. Adapt it for the development scenario where cash flows are irregular." Examples anchor the LLM's understanding and reduce ambiguity.
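The adaptation asked for in that example prompt — NPV when cash flows land at irregular dates — can itself be shown as an example. Here is a minimal Python sketch in the spirit of Excel's XNPV, with hypothetical development-draw dates and an actual/365 day-count assumption:

```python
from datetime import date

def xnpv(rate, cash_flows):
    """NPV for irregularly timed cash flows: each flow is discounted
    by its actual/365 year fraction from the first flow's date."""
    t0 = cash_flows[0][0]
    return sum(cf / (1 + rate) ** ((d - t0).days / 365)
               for d, cf in cash_flows)

# Hypothetical development draws and a sale, irregularly spaced
flows = [
    (date(2025, 1, 1), -4_000_000),   # land close
    (date(2025, 7, 15), -2_500_000),  # construction draw
    (date(2026, 2, 1), -1_500_000),   # final draw
    (date(2028, 6, 30), 11_000_000),  # sale proceeds
]
value = xnpv(0.10, flows)
```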
The constraint-based pattern defines what the model must not do. "Build the debt schedule, but don't use any circular references. If debt sizing depends on cash flow which depends on debt service, use an iterative approach instead." Constraints prevent the LLM from going down paths that would create problems, even if those paths might seem logical.
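The iterative approach that constraint asks for can be made explicit. A classic version of the circularity is loan sizing at a loan-to-cost ratio when total cost includes a financing fee on the loan itself; this Python sketch (with hypothetical terms) shows the fixed-point loop that replaces a circular reference:

```python
def size_loan(hard_costs, ltc, fee_rate, tol=1.0, max_iter=100):
    """Break a sizing circularity without circular references: the loan
    is a share (LTC) of total costs, but total costs include a financing
    fee that depends on the loan. Iterate to a fixed point instead of
    letting Excel chase its own tail."""
    loan = ltc * hard_costs             # first guess: ignore the fee
    for _ in range(max_iter):
        total_costs = hard_costs + fee_rate * loan
        new_loan = ltc * total_costs
        if abs(new_loan - loan) < tol:
            return new_loan
        loan = new_loan
    raise RuntimeError("loan sizing did not converge")

# Hypothetical: $20M hard costs, 65% LTC, 1% financing fee on the loan
loan = size_loan(20_000_000, 0.65, 0.01)
```

In a spreadsheet, the same idea becomes a small iteration table or a closed-form rearrangement; either way, the constraint in the prompt steers the LLM toward a structure that stays auditable.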
The incremental pattern adds capability to an existing foundation. "We have the basic cash flow working. Now add a refinancing toggle at year 3 that lets the user choose between hold-to-maturity and a refi scenario." Incremental additions are lower risk than building everything at once, because each addition can be verified before moving on.
The Socratic pattern asks the LLM to help you think through the problem before building. "I need to model a JV waterfall with a promote structure. Before you build anything, what questions should I answer to make sure we get this right?" This surfaces ambiguities and requirements you might not have thought of. It also creates a shared understanding that makes the subsequent build go more smoothly.
These patterns aren't mutually exclusive. A good workflow might combine several: start with the Socratic pattern to clarify requirements, use skeleton-first for the structure, then incremental for building out sections, with constraints applied throughout.
The more you work with LLMs, the more you'll develop your own library of patterns that fit your workflow and model types.
Knowing When to Stop
There's a temptation, when working with an LLM, to keep iterating until the model is perfect. The LLM is always available, always patient, always willing to try again.
But at some point, you hit diminishing returns.
Sometimes the issue is that you're asking for something the LLM struggles with. Complex nested logic, intricate interdependencies, highly customized structures — these often reach a point where continued iteration produces minimal improvement and the fastest path forward is to do it yourself.
Sometimes the issue is communication. If you've tried three different ways to explain what you want and the LLM keeps missing it, that's a signal. Either your requirements are unclear (even to you), or this particular task is better suited to manual work.
Sometimes the time spent explaining exceeds the time it would take to just do it. Simple modifications, quick calculations, minor formatting changes — these can often be done faster manually than through a prompt-and-response cycle.
Developing judgment about when to push forward and when to take over is part of the skill. There's no formula for this. It's a feel you develop through experience, learning to recognize the signs that you're in a productive iteration versus a frustrating loop.
A useful heuristic: if you've made three attempts at the same thing without meaningful progress, pause and reassess. Maybe you need to decompose further. Maybe your specification is missing something. Maybe this isn't a good task for the LLM. But continuing to bang away without reflection rarely produces a breakthrough.
Getting Better Over Time
Building models with LLMs is a skill, and like any skill, it develops with deliberate practice.
Save what works. When you craft a prompt that produces exactly what you want, keep it. Build a library of prompts for common tasks — your DCF starter, your rent roll analyzer, your waterfall structure. These become templates you can adapt, and reviewing them helps you understand what made them effective.
Reflect on what didn't work. When a session goes poorly, resist the urge to just move on. Spend a few minutes thinking about why. Was the decomposition wrong? Was the specification unclear? Were you in the wrong collaboration mode? This reflection is where learning happens.
Notice your patterns. Everyone has tendencies. Maybe you consistently under-specify time conventions. Maybe you forget to mention formatting preferences. Maybe you stay in generative mode when you should switch to pair building. Becoming aware of your patterns lets you correct them.
Calibrate your expectations. Different model types have different LLM-friendliness. Simple DCFs are straightforward. Complex waterfalls are harder. Development models with phased construction are harder still. Knowing what to expect helps you allocate time appropriately and reduces frustration when complex tasks take more iteration.
Stay current. LLM capabilities are evolving rapidly. What was hard six months ago might be easy now. What required careful workarounds might be handled automatically. Periodically test assumptions you've made about what the LLM can and can't do. You might be pleasantly surprised.
The Bigger Picture
Building Excel models with LLMs isn't just about efficiency, though the efficiency gains are real. It's about expanding what's possible.
With these tools, you can build models you wouldn't have attempted before because of time constraints. You can explore more scenarios, test more structures, and deliver more polished work. You can focus your energy on judgment and decision-making rather than mechanical construction.
But realizing this potential requires treating the LLM as a genuine collaborator — understanding how it thinks, communicating clearly, choosing the right mode of engagement, and developing your skills over time.
The investment pays off. What starts as awkward and uncertain becomes fluid and natural. You develop intuition for how to frame problems, how to iterate effectively, how to verify outputs. The LLM becomes an extension of your capabilities rather than a separate tool you have to manage.
That's the goal. Not just building better models — becoming a better builder.
About Apers AI
Apers AI was founded by researchers from Harvard, Yale, and MIT with backgrounds in quantitative asset pricing and institutional real estate. Our mission is to advance the science of capital deployment by applying autonomous agents and machine reasoning to real asset markets.