Real estate analysts model manually because the industry's culture prioritizes control and precision over speed, and most firms lack the training infrastructure to implement AI-assisted modeling at scale. Despite the availability of automation tools, institutional real estate teams continue building pro formas cell-by-cell due to entrenched workflows, skepticism about AI accuracy, and the absence of quality assurance frameworks that would allow analysts to verify AI-generated outputs confidently.
Relevant Articles
- Want to see technical limits? See [Why Generic AI Can't Build Complete Excel Models].
- Ready to change your workflow? Review [How to Stop Building Pro Formas from Scratch].
- Understand where the industry is headed: [The Future of Real Estate Modeling].
The Current State of RE Modeling
Walk into any institutional real estate shop—pension funds, private equity firms, REIG platforms—and you'll find analysts spending 60-80% of their week building Excel models from scratch. They're copying last quarter's acquisition model, deleting the old deal specifics, and manually reconstructing cash flow waterfalls, debt sizing blocks, and IRR calculations for the next asset.
This is not a 2015 problem. This is happening right now, in 2026, at firms managing billions in AUM. The tools exist to automate significant portions of this work. AI can draft operating cash flows. Template libraries can standardize waterfall structures. Yet the default mode remains manual construction.
The resistance isn't technical ignorance. These analysts know Python exists. They've heard of ChatGPT. Some have even experimented with AI code generation. But when it's time to underwrite a $50M acquisition with a 90-day close timeline, they revert to what they know: Excel, manual formulas, and the modeling patterns they learned as junior analysts.
The inertia runs deeper than individual preference. It's institutional. Modeling workflows are embedded in training programs, review processes, and quality control systems that assume human construction at every step. When a VP reviews a model, they expect to trace every formula back to first principles. They want to see the logic unfold across tabs in a structure they recognize. AI-generated models—even correct ones—disrupt this review cadence.
This creates a paradox: firms complain about modeling bottlenecks while simultaneously refusing to adopt tools that would eliminate them. The problem is not that automation doesn't work. The problem is that the industry hasn't built the verification and iteration frameworks needed to trust it.
Why Generic Tools Fall Short
Generic AI platforms—ChatGPT, Claude, Copilot—can generate formulas. They can draft NPV calculations, build sensitivity tables, and even construct basic waterfalls. But they fail at institutional-grade modeling because they lack domain context and can't iterate on feedback without losing structural coherence.
Here's what happens when an analyst prompts ChatGPT to "build a three-tier waterfall with an 8% pref and a 15% hurdle": The AI returns a formula block that looks plausible. It has IF statements. It references IRR. But when the analyst tests it against known deal outcomes, the numbers don't match. The lookback logic is wrong. The hurdle calculation uses project-level IRR instead of LP IRR. The catch-up tier doesn't actually catch up.
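To see why those errors matter, here is a deliberately simplified waterfall sketch: one LP contribution at t=0, one distribution at exit, no GP catch-up tier, and illustrative splits that are not from any deal described here. The point is the second tier, which measures the hurdle on the LP's own cash flows rather than project-level cash flows.

```python
def hurdle_amount(capital, rate, years):
    # Exit distribution that delivers `rate` IRR on a single t=0 contribution
    return capital * (1 + rate) ** years


def three_tier_waterfall(capital, proceeds, years,
                         pref=0.08, hurdle=0.15,
                         split_mid=0.80, split_top=0.70):
    """Toy European-style waterfall: one contribution, one distribution.
    Omits the GP catch-up for brevity. Returns (lp, gp)."""
    lp, gp, remaining = 0.0, 0.0, proceeds

    # Tier 1: return of capital plus the compounded 8% pref, 100% to LP
    tier1 = min(remaining, hurdle_amount(capital, pref, years))
    lp += tier1
    remaining -= tier1

    # Tier 2: 80/20 split until the LP's cumulative take reaches the 15%
    # hurdle -- measured on LP cash flows, not project-level cash flows
    lp_need = max(0.0, hurdle_amount(capital, hurdle, years) - lp)
    tier2 = min(remaining, lp_need / split_mid)
    lp += tier2 * split_mid
    gp += tier2 * (1.0 - split_mid)
    remaining -= tier2

    # Tier 3: residual split above the hurdle
    lp += remaining * split_top
    gp += remaining * (1.0 - split_top)
    return lp, gp
```

Even this toy version makes the failure modes testable: if the GP receives anything before the compounded pref is met, or the tiers pay out more than the proceeds, the logic is wrong.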
The analyst now faces a choice: fix the formula manually (which defeats the purpose of using AI) or re-prompt the AI with corrections. Most analysts choose the former. They don't have time to teach the AI real estate finance. They need a working model today, not a tutoring session with a chatbot.
The deeper issue is iteration. When an analyst identifies an error in an AI-generated waterfall, they can't simply say "fix the lookback logic." The AI doesn't retain the structural context. It regenerates the entire block, often introducing new errors while fixing the original one. This is where the iteration meta-skill becomes critical: the ability to refine AI outputs through structured feedback loops without losing coherence.
Generic tools weren't designed for this. They were built for one-shot generation, not multi-step refinement. Real estate modeling requires 5-10 iterations before a waterfall is production-ready. Without a framework to guide that process, analysts abandon AI and return to manual construction.
Template Limitations
Some firms attempt to solve the manual modeling problem with template libraries. They build a "master" acquisition model, lock down the structure, and train analysts to populate inputs without modifying formulas. This works for commoditized assets—think single-tenant NNN or core multifamily—but breaks down the moment deal complexity increases.
Consider a value-add office repositioning with phased lease-up, mezzanine debt, and a promote structure tied to both project-level and property-level IRR hurdles. The template can't accommodate this. The waterfall logic is different. The debt amortization schedule needs custom inputs. The analyst is back to manual construction, but now they're fighting the template's rigid structure instead of building from a blank sheet.
Templates also calcify bad practices. If the original builder made a modeling error—say, calculating IRR on equity deployed instead of equity contributed—that error propagates across every deal underwritten with the template. Analysts inherit technical debt without knowing it. When someone finally catches the error, the firm faces a decision: fix the template and re-audit every prior deal, or ignore it and hope it doesn't matter.
This is why template-based automation fails at scale. Real estate deals are heterogeneous. Every asset class, capital structure, and GP agreement introduces variations that templates can't anticipate. Firms need modeling systems that adapt to deal specifics, not analysts who adapt to template constraints.
The alternative is modular, composable modeling—building reusable calculation blocks (preferred return, IRR hurdles, debt sizing) that can be combined and customized per deal. But this requires decomposition skills that most analysts never learn. They're trained to build models top-to-bottom in a single pass, not to construct logic libraries that can be reconfigured.
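To make "composable blocks" concrete, here is a minimal sketch of a debt-sizing block assembled from two reusable constraint functions. The covenant values and annual-payment simplification are illustrative assumptions, not the article's specification.

```python
def size_loan_ltv(value, max_ltv):
    # Constraint 1: loan capped at a fraction of asset value
    return value * max_ltv


def size_loan_dscr(noi, min_dscr, rate, amort_years):
    # Constraint 2: largest loan whose annual debt service keeps
    # DSCR at or above the floor (annual payments for simplicity)
    max_service = noi / min_dscr
    # Mortgage constant for a fully amortizing loan
    k = rate / (1 - (1 + rate) ** -amort_years)
    return max_service / k


def size_loan(value, noi, max_ltv=0.65, min_dscr=1.25,
              rate=0.06, amort_years=30):
    # Composed block: the binding constraint wins
    return min(size_loan_ltv(value, max_ltv),
               size_loan_dscr(noi, min_dscr, rate, amort_years))
```

Because each constraint lives in its own function, a deal with mezzanine debt or a different covenant package swaps in a new constraint rather than forcing a rebuild of the whole block.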
The Training Gap
Most real estate analysts learn modeling through apprenticeship. A senior analyst hands them a prior deal model and says, "use this as a template." The junior analyst reverse-engineers the structure, copies the formula patterns, and repeats them on the next deal. There's no formal instruction on modeling logic, no curriculum on verification testing, and no framework for evaluating whether a model is correct beyond "does the output seem reasonable?"
This works when the senior analyst is disciplined and the deal structures are consistent. But it breaks down when complexity increases or when the junior analyst encounters a modeling pattern they haven't seen before. They don't have the conceptual tools to decompose the problem. They don't know how to test their assumptions. They build a model that looks right but produces wrong answers under edge-case scenarios.
AI-assisted modeling requires a different skill set. Analysts need to write precise prompts that specify constraints. They need to test AI outputs against known benchmarks. They need to iterate on errors without restarting from scratch. These are not Excel skills—they're specification and verification skills. And most firms don't teach them.
The training gap is not about AI literacy. Analysts understand that AI exists. The gap is about modeling fundamentals: What does "correct" mean in a waterfall calculation? How do you verify that a debt sizing block handles refinancing correctly? What tests should you run before presenting a model to a deal committee?
Firms that solve this gap don't just train analysts to use AI. They train analysts to model with rigor, whether they're using AI, Excel, or pencil and paper. The tool is secondary. The process—decomposition, specification, iteration, verification—is primary. Until firms rebuild training programs around these principles, analysts will continue modeling manually because it's the only workflow they trust.
Fear of AI Errors
The single biggest barrier to AI adoption in real estate modeling is not technical capability. It's risk perception. Analysts fear that AI will make invisible errors—mistakes buried in formula logic that look correct on the surface but produce wrong outputs under specific conditions.
This fear is not irrational. Generic AI does hallucinate formulas. It confidently presents IRR calculations that reference the wrong cash flow cells. It builds sensitivity tables that don't actually link to the base case inputs. An analyst who blindly trusts AI output will present a flawed model to senior leadership, damage their credibility, and potentially cost the firm a deal.
The solution is not to avoid AI. The solution is to build verification frameworks that catch errors before they propagate. This is where the iteration meta-skill becomes non-negotiable. Analysts must treat AI-generated models as drafts, not final outputs. They must run zero tests (zero out the inputs and confirm every downstream output goes to zero), cross-check totals, and validate logic against known benchmarks.
But here's the problem: most analysts don't know what tests to run. They check that the IRR formula "looks right," but they don't test whether it handles negative cash flows correctly. They verify that the waterfall distributes the correct total, but they don't test whether the tier thresholds actually correspond to the stated hurdle rates.
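A few of those checks can be written once and reused. The sketch below is a hypothetical verification harness, not an exhaustive QA suite: it assumes any waterfall function with the signature `waterfall(capital, proceeds, years) -> (lp, gp)`, and the toy model at the bottom exists only to demonstrate the harness.

```python
def check_waterfall(waterfall, capital=100.0, years=5, pref=0.08):
    """Three basic sanity tests for a waterfall function.
    Illustrative only -- a production QA suite would go much further."""
    # Zero test: no proceeds, no distributions
    lp, gp = waterfall(capital, 0.0, years)
    assert lp == 0.0 and gp == 0.0, "zero test failed"

    # Conservation test: tiers must pay out exactly the proceeds
    lp, gp = waterfall(capital, 250.0, years)
    assert abs((lp + gp) - 250.0) < 1e-9, "distributions != proceeds"

    # Threshold test: below the compounded pref, the GP gets nothing
    pref_amount = capital * (1 + pref) ** years
    lp, gp = waterfall(capital, 0.5 * pref_amount, years)
    assert gp == 0.0, "GP paid before the pref was met"
    return True


def toy_waterfall(capital, proceeds, years, pref=0.08, split=0.80):
    # Minimal two-tier model used only to exercise the harness
    pref_amount = capital * (1 + pref) ** years
    tier1 = min(proceeds, pref_amount)
    residual = proceeds - tier1
    return tier1 + split * residual, (1 - split) * residual
```

The same harness runs unchanged against a human-built model or an AI draft, which is exactly what makes iteration safe: every regeneration gets re-tested automatically.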
The result is a double bind: analysts don't trust AI because they can't verify its outputs, and they can't verify its outputs because they lack the testing frameworks to do so. Firms that break this cycle don't just adopt AI—they build quality assurance processes that make AI-generated models as trustworthy as human-generated ones.
This requires cultural change, not just technical training. It means reframing "AI error" as a testing gap, not a tool failure. It means teaching analysts that iteration is not rework—it's the process by which complex models become production-ready. And it means accepting that the first draft of any model, human or AI, will have errors. The question is not "did AI make a mistake?" The question is "did we catch it before it mattered?"
How the Industry Is Starting to Change
A small number of institutional firms are moving beyond manual modeling, and their approach is instructive. They're not adopting generic AI tools. They're building domain-specific workflows that combine AI generation with structured verification.
Here's what this looks like in practice: An analyst specifies deal parameters—asset type, hold period, capital structure, promote terms—using a constrained input form. AI drafts the model skeleton: tabs, calculation blocks, formula structure. The analyst reviews the draft, identifies errors, and provides feedback through iteration prompts. The AI refines the model, rerunning verification tests at each step. The final output is a human-verified, AI-accelerated model that took hours, not days.
The key innovation is not the AI. It's the iteration framework. Analysts are trained to decompose modeling tasks into testable components. They specify constraints explicitly ("the preferred return must compound quarterly, not annually"). They run standardized verification tests (zero tests, IRR cross-checks, waterfall reconciliation). And they iterate on errors without losing structural coherence.
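That quarterly-versus-annual constraint is worth making explicit, because the two conventions produce different numbers. A minimal sketch, with illustrative inputs:

```python
def pref_balance(capital, annual_rate, years, periods_per_year=1):
    # Compounded preferred-return balance at the end of the hold period;
    # set periods_per_year=4 for quarterly compounding
    r = annual_rate / periods_per_year
    n = periods_per_year * years
    return capital * (1 + r) ** n


# On $100 at 8% over 5 years:
annual = pref_balance(100.0, 0.08, 5)        # ~146.93 (annual compounding)
quarterly = pref_balance(100.0, 0.08, 5, 4)  # ~148.59 (quarterly compounding)
```

A roughly $1.66 gap per $100 of capital is exactly the kind of difference that survives a "looks reasonable" review but fails a specification test, which is why the constraint belongs in the prompt and in the verification suite.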
This is not a 2030 vision. Firms are implementing this now. The analysts using these workflows are not "prompt engineers" or data scientists. They're traditional Excel modelers who learned to apply modeling discipline to AI-generated outputs. The same rigor they would apply to a human-built model—checking formulas, testing edge cases, validating assumptions—they now apply to AI drafts.
The cultural shift is subtle but important: AI is treated as a junior analyst, not a magic button. You don't trust its first output. You review it, test it, and iterate on it until it meets institutional standards. This framing removes the fear. Analysts are not replacing their judgment with AI. They're accelerating the mechanical work so they can spend more time on the judgment calls that actually matter—credit analysis, market assumptions, strategic positioning.
The firms making this transition are not abandoning Excel. They're augmenting it. The final deliverable is still a transparent, auditable spreadsheet that a CFO can review. But the path to building that spreadsheet now involves AI-assisted drafting, structured iteration, and automated verification. The result: faster modeling, fewer errors, and analysts who focus on analysis instead of formula construction.
This is where the industry is headed. Not because AI is better than humans at modeling—it's not, yet—but because the combination of AI generation and human verification is better than manual construction alone. Analysts who learn iteration skills today will model faster and more accurately than peers who cling to manual workflows. Firms that build verification infrastructure today will underwrite more deals with the same headcount. The question is not whether real estate modeling will become AI-assisted. The question is which firms will learn to do it correctly first.