Can AI Build Real Estate Pro Formas in Excel?

Can AI build a real estate pro forma in Excel? Yes—AI can generate complete pro forma models in Excel format, including multi-tab structures with operating assumptions, cash flow projections, return calculations, and formatting. The output quality depends on prompt specificity, the AI's understanding of financial logic, and whether it can produce actual Excel files versus just formula suggestions.

Working Example: Project "Cascade"

To ground this discussion in real numbers, we'll reference a specific deal throughout this article:

Project Name: Cascade
Asset Type: 120-Unit Value-Add Multifamily
Location: Austin, TX
Purchase Price: $28,500,000
Total Equity Required: $7,125,000 (25%)
Debt: $21,375,000 at 6.5%, I/O
Hold Period: 5 years
Key Constraint: 3% annual rent growth, $3.2M renovation budget in Years 1-2

This is the type of deal where you need a complete pro forma model—not just formulas in a chat window. The question is whether AI can actually build it.

Current AI Capabilities

AI systems can now generate real estate pro forma models that open in Excel as complete, formatted workbooks. This is not theoretical—as of 2025, several platforms produce multi-tab models with linked formulas, named ranges, and conditional formatting. The capability exists. The question is how reliably it works and what you need to know to use it correctly.

The core technology relies on Large Language Models (LLMs) that have been trained on spreadsheet structures and financial modeling conventions. These models understand that a pro forma requires distinct calculation blocks: operating assumptions, revenue projections, operating expenses, debt service, capital expenditures, and return metrics. They can structure these blocks across tabs, link them with formulas, and format the output for analyst review.

For Project Cascade, AI can generate a model that calculates five years of monthly rent revenue based on 120 units at an average starting rent of $1,425/unit, escalating at 3% annually. It can build a debt service schedule for $21,375,000 at 6.5% interest-only. It can structure a renovation budget of $3.2M phased across Years 1 and 2. The model will produce cash flow projections, calculate equity multiple and IRR, and organize these calculations into separate tabs with proper cell references.

The technical foundation has advanced significantly. Whereas chat interfaces like ChatGPT or Claude can only suggest formulas as text, specialized systems can now output the actual Excel file. This distinction matters. Text-based suggestions require manual copy-paste and error-checking. File-based output gives you a working model immediately, even if you still need to verify the logic. The verification step is non-negotiable, but you start from a complete structure rather than an empty spreadsheet.

Most current AI modeling tools use one of two approaches: they either generate the Excel file programmatically (writing the XML structure that comprises an .xlsx file), or they use Excel's COM interface to build the file cell-by-cell. Both methods work. The programmatic approach tends to be faster and can handle larger models without performance issues. The COM approach gives more granular control over formatting but can be slower for complex models. For practical purposes, the output is the same—you receive an Excel file you can open, edit, and use.
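To make the programmatic approach concrete, here is a minimal sketch using the openpyxl library, which writes the .xlsx XML structure directly without needing Excel installed. The sheet names, cells, and values mirror Project Cascade but are illustrative, not a reproduction of any specific tool's output:

```python
from openpyxl import Workbook

# Build a two-tab workbook: inputs on one sheet, linked formulas on another.
wb = Workbook()

inputs = wb.active
inputs.title = "Assumptions"
inputs["A1"], inputs["B1"] = "Units", 120
inputs["A2"], inputs["B2"] = "Avg Starting Rent", 1425
inputs["A3"], inputs["B3"] = "Rent Growth", 0.03

revenue = wb.create_sheet("Revenue")
revenue["A1"] = "Year 1 Gross Potential Rent"
# Formulas are written as plain strings; Excel evaluates them on open.
revenue["B1"] = "=Assumptions!B1*Assumptions!B2*12"
revenue["A2"] = "Year 2 Gross Potential Rent"
revenue["B2"] = "=B1*(1+Assumptions!B3)"

wb.save("cascade_proforma.xlsx")
```

Note that openpyxl stores the formula text but does not calculate it; the cell values are computed the first time Excel opens the file. That is one reason verification happens in Excel, not in the generation script.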

What AI Can Build Today

AI can construct the full architecture of a real estate pro forma: separate tabs for inputs, operating assumptions, revenue, expenses, capital expenditures, debt, and returns. It can link these tabs with formulas so that changing an assumption in one location updates downstream calculations. It can apply basic formatting like number formats, table borders, and color coding for inputs versus calculations.

For Project Cascade, an AI-generated model would include an Assumptions tab containing the 120-unit count, $28.5M purchase price, $1,425 starting rent, 3% rent growth, 6.5% interest rate, and $3.2M renovation budget. A Revenue tab would calculate gross potential rent as =120 * 1425 * 12 in Year 1, then multiply by (1+3%)^n for subsequent years. An Expenses tab would list line items for management fees, repairs and maintenance, taxes, insurance, and utilities, typically expressed as per-unit or percentage-of-revenue assumptions. A Debt tab would calculate interest-only payments as =$21,375,000 * 6.5% / 12 for 60 months. A CapEx tab would phase the $3.2M renovation budget, allocating $2M in Year 1 and $1.2M in Year 2.

The Returns tab would consolidate these streams into net cash flow, then calculate equity multiple as (total cash distributed + exit proceeds) / $7,125,000 initial equity. It would calculate IRR using Excel's XIRR function applied to the cash flow series. These are not placeholder formulas—AI can write the actual cell references that link back to the source calculations.

AI can also handle structural complexity. Multi-tier waterfall models with preferred returns, IRR hurdles, and GP promote tiers are within current capabilities. Sensitivity tables that vary two inputs (say, exit cap rate and rent growth) and display the resulting IRR matrix can be generated. Most systems can build these using Excel's data table functionality or by directly calculating each cell in the matrix.

What AI struggles with is ambiguity. If your prompt says "build a pro forma for Project Cascade," the AI must infer what level of detail you want, what expense categories to include, whether you want monthly or annual granularity, and whether the model should include refinance scenarios or just hold-and-sell. The more you specify, the better the output. A prompt that states "monthly cash flow for 60 months, expense categories for multifamily (management, repairs, taxes, insurance, utilities, marketing), no refinance, exit in Year 5 at 5.5% cap rate" will produce a more accurate model than a vague request.

The quality difference between a generic prompt and a specific prompt is not subtle. A vague prompt might produce a model with placeholder values or missing expense categories. A specific prompt produces a model that reflects your actual deal parameters. The AI is not guessing what you want—it is building what you describe. Treat the prompt like a specification document, not a casual request. When prompting AI to build Excel models, specificity is the primary variable you control.

Accuracy and Reliability

AI-generated models are structurally sound more often than they are logically perfect. The formulas usually reference the correct cells, the tabs link properly, and the model opens without errors. Where problems occur is in the financial logic: incorrect escalation formulas, misapplied expense percentages, or wrong IRR calculations due to timing errors.

In our review of AI-generated models across multiple platforms, we see consistent accuracy in basic arithmetic and cell referencing. AI is good at writing =B5 * C5 when that is the correct formula. It is less reliable at knowing whether an expense should escalate at CPI, at rent growth, or remain flat. That decision requires domain knowledge, and AI applies defaults that may not match your market or asset class.

For Project Cascade, an AI model might correctly calculate Year 1 gross rent as 120 units * $1,425/month * 12 months = $2,052,000. It will correctly escalate this by 3% in Year 2 to reach $2,113,560. But if you did not specify that property taxes escalate at 2.5% annually, the AI might hold them flat, or escalate them at the same 3% rent growth rate, or apply a different assumption based on its training data. None of these is "wrong" in a formulaic sense—the formula =prior_year * (1+growth_rate) is correct. The error is in choosing the growth rate.
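The dollar impact of that growth-rate choice compounds over the hold. A quick sketch, using a hypothetical Year 1 property tax bill of $285,000 (the article does not specify one), shows how three equally "correct" escalation formulas diverge:

```python
# Hypothetical Year 1 property tax bill; the growth-rate choice is the point.
year1_taxes = 285_000
hold_years = 5

def total_taxes(growth_rate):
    # Sum the tax bill across the hold, escalating each year.
    return sum(year1_taxes * (1 + growth_rate) ** y for y in range(hold_years))

flat = total_taxes(0.00)    # AI held taxes flat
cpi = total_taxes(0.025)    # your market convention
rent = total_taxes(0.03)    # AI matched rent growth

print(round(flat), round(cpi), round(rent))
```

All three use the same formula structure; only the rate differs, and the five-year totals spread by tens of thousands of dollars.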

This is where verification becomes the required meta-skill. You do not accept AI output as final. You run tests to confirm the logic. The simplest test: does the model balance? For a sources-and-uses statement, do total sources equal total uses? For a cash flow waterfall, does the sum of LP and GP distributions equal total distributable cash? These are "Zero Tests"—calculations that should equal zero if the model is logically consistent. If the test returns a non-zero value, you have an error.
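A Zero Test is trivial to express in code. Here is the sources-and-uses version for Project Cascade; for simplicity, the $3.2M renovation budget is assumed to be funded separately and is excluded from closing sources and uses:

```python
# Sources-and-uses Zero Test for Project Cascade (renovation budget
# assumed to be funded separately, so it is excluded at closing).
sources = {"equity": 7_125_000, "debt": 21_375_000}
uses = {"purchase_price": 28_500_000}

balance_check = sum(sources.values()) - sum(uses.values())

# A logically consistent model returns exactly zero.
assert balance_check == 0
```

In Excel, the same check is one cell: total sources minus total uses, formatted to flag any non-zero result.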

A second verification method: directional sense checks. If you increase rent growth from 3% to 4%, does IRR increase? It should. If it does not, something in the model is broken—perhaps the exit valuation is not linked to Year 5 NOI, or the cash flow is not updating downstream. These sense checks catch errors that do not show up as #REF! or #VALUE! errors. The formulas work, but they calculate the wrong thing.
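The directional check can be rehearsed outside Excel with a toy annual model. This sketch assumes a Year 1 NOI of roughly $1.49M (chosen so Year 5 NOI lands near the article's $1,680,000 at 3% growth) and uses a simple bisection IRR in place of Excel's XIRR:

```python
EQUITY = 7_125_000
DEBT = 21_375_000
DEBT_SERVICE = DEBT * 0.065          # annual interest-only payment
EXIT_CAP = 0.055
CLOSING_COSTS = 0.02
YEAR1_NOI = 1_492_662                # assumed: yields ~$1.68M Year 5 NOI at 3% growth

def cash_flows(noi_growth):
    nois = [YEAR1_NOI * (1 + noi_growth) ** y for y in range(5)]
    flows = [-EQUITY] + [noi - DEBT_SERVICE for noi in nois]
    sale = nois[-1] / EXIT_CAP
    flows[-1] += sale * (1 - CLOSING_COSTS) - DEBT   # net exit proceeds
    return flows

def irr(flows, lo=-0.99, hi=1.0):
    # Bisection on NPV; a stand-in for Excel's XIRR in this annual model.
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(flows))
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Directional sense check: faster NOI growth must raise IRR.
assert irr(cash_flows(0.04)) > irr(cash_flows(0.03))
```

If that assertion ever failed in a real model, you would know the exit valuation or a downstream link is broken, even though no cell shows an error.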

A third method: compare AI output to a prior model you trust. For Project Cascade, if you have previously modeled a similar 120-unit multifamily deal, compare the expense assumptions line by line. Are management fees 3% of revenue in both models, or did AI default to 4%? Are repairs and maintenance $500 per unit annually, or did AI assume $400? These differences compound over a 5-year hold period and can shift IRR by 100-200 basis points. The AI is not intentionally wrong—it is using different defaults than you would.

Error rates vary by model complexity. Simple single-tenant NNN lease models are nearly error-free. Multifamily value-add models with renovation budgets and lease-up curves have a higher error rate, typically in how the lease-up curve affects revenue and whether renovation costs are correctly phased. Waterfall models with multi-tier promote structures have the highest error rate, particularly in catch-up provisions and lookback IRR calculations. The more complex the logic, the more likely the AI misinterprets a subtle structural requirement.

Limitations to Know

AI cannot build what it does not understand from your prompt. If you do not specify a renovation budget, the model will not include one. If you mention "value-add" but do not define what that means numerically, AI will guess—and its guess will likely be generic. For Project Cascade, "value-add" means $3.2M in renovations over two years, targeting a $300/month rent premium on 80 of the 120 units after renovation. Without that level of detail, AI might assume a different scope.

AI does not know market-specific conventions unless you state them. Multifamily expense assumptions in Austin differ from those in New York. Property tax rates, insurance costs, utility expenses, and management fee norms are market-dependent. AI uses aggregated training data, which means it applies average assumptions unless you override them. If Austin multifamily properties typically run 42% expense ratios, but AI defaults to 38%, your NOI projection will be overstated by roughly $82,000 annually. Over five years, that error propagates into exit value and IRR.
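The roughly $82,000 figure falls directly out of Cascade's Year 1 revenue:

```python
year1_revenue = 2_052_000            # 120 units * $1,425/month * 12 months

noi_at = lambda expense_ratio: year1_revenue * (1 - expense_ratio)

# NOI is overstated if AI defaults to a 38% ratio when the market runs 42%.
overstatement = noi_at(0.38) - noi_at(0.42)
assert round(overstatement) == 82_080   # roughly $82,000 per year
```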

AI does not handle exceptions well. If Project Cascade has a 12-month interest-only period followed by amortization, you must specify that. If there is a lease-up assumption for renovated units (say, 3 months per unit), you must define it. If the GP receives an acquisition fee at closing and an asset management fee annually, you must include those terms. AI will not infer these details from context. It builds what you describe, not what you intend.

Another limitation: AI-generated models lack scenario analysis by default. If you want to see IRR under different exit cap rate assumptions, you must request a sensitivity table. If you want to model a refinance in Year 3, you must specify the refinance terms, timing, and cash-out amount. AI does not proactively build optionality into the model. It builds the base case you describe.

AI also does not enforce institutional modeling standards. It will not automatically separate inputs, calculations, and outputs into distinct tabs unless you instruct it to. It will not use consistent naming conventions for cells and ranges unless you specify them. It will not apply the color-coding schemes (blue for inputs, black for formulas) that most institutional models use. These are workflow preferences, not logic requirements, but they matter when multiple analysts review or edit the model. If you want these conventions, include them in your prompt.

Finally, AI does not document its assumptions. A human analyst includes a sources note citing where the cap rate assumption came from, or an explanation of why expenses escalate at 2.5% instead of 3%. AI does not add this commentary unless you explicitly request it. The model will calculate correctly based on the inputs you provided, but six months later, you may not remember why you used a 5.5% exit cap rate instead of 5.75%. Build assumption documentation into your prompt if you need it.

Reviewing AI Output

Open the model. Check that it contains the tabs and structure you expected. Verify that the Assumptions tab includes all inputs from your prompt. Confirm that the Revenue tab calculates gross potential rent correctly for Year 1, then check that the growth formula applies to subsequent years. Trace one expense line item from assumption to calculation to cash flow to ensure the links work. Check the debt service calculation against a simple interest-only or amortization formula outside the model. Verify that total cash flow equals revenue minus expenses minus debt service minus capital expenditures.

For Project Cascade, start with the Revenue tab. Year 1 rent should be 120 units * $1,425/month * 12 months = $2,052,000. Year 2 should be $2,113,560 (a 3% increase). If those numbers are correct, move to Expenses. Management fees should be $2,052,000 * 3% = $61,560 in Year 1, assuming 3% of revenue. Repairs and maintenance should be 120 units * $500/unit = $60,000 if that is your assumption, and it should escalate at 2.5% annually if that is your convention. Check two or three line items in detail. If they are correct, the rest are likely correct as well, but scan the full list for obvious errors.
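These spot checks can be written as plain assertions, which is exactly what you are doing mentally when you trace the cells:

```python
UNITS, START_RENT, RENT_GROWTH = 120, 1_425, 0.03

year1_rent = UNITS * START_RENT * 12
year2_rent = year1_rent * (1 + RENT_GROWTH)

assert year1_rent == 2_052_000
assert round(year2_rent) == 2_113_560

# Expense spot checks: 3% management fee, $500/unit repairs & maintenance.
assert round(year1_rent * 0.03) == 61_560
assert UNITS * 500 == 60_000
```

If any assertion fails against the model's numbers, trace that line item back to its assumption before checking anything else.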

Check debt service. The model should calculate monthly interest as $21,375,000 * 6.5% / 12 = $115,781.25. Over 12 months, that is $1,389,375 in Year 1. If the model shows a different number, trace the formula to find the error. Common mistakes: incorrect interest rate (6.5% entered as 0.65 instead of 0.065), incorrect periodicity (annual rate not divided by 12), or incorrect principal balance (using purchase price instead of loan amount).

Check the capital expenditures tab. Renovation costs should total $3.2M, phased as $2M in Year 1 and $1.2M in Year 2. If the model spreads this differently, or totals to a different amount, the AI misunderstood the prompt. Correct it manually or regenerate the model with a clearer prompt.

Check the Returns tab. Equity multiple should be calculated as (cumulative cash distributed + exit proceeds) / $7,125,000. IRR should use the XIRR function with actual dates, not the IRR function with period numbers, because XIRR accounts for the timing of cash flows more accurately. Exit proceeds should be calculated as Year 5 NOI / exit cap rate, minus remaining debt, minus closing costs. For Project Cascade, if Year 5 NOI is $1,680,000 and the exit cap rate is 5.5%, the gross sale price is $30,545,455. Subtract $21,375,000 debt (assuming interest-only, so no paydown) and 2% closing costs ($610,909), leaving net proceeds of $8,559,546. Add cumulative cash distributions over five years, divide by $7,125,000 equity, and you have the equity multiple. If the model shows a materially different number, trace the calculation to find where the error occurred.
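The exit math above can be replicated independently of the model. The cumulative distribution figure below is hypothetical (the article does not state one); everything else comes from the Cascade parameters:

```python
YEAR5_NOI = 1_680_000
EXIT_CAP = 0.055
DEBT = 21_375_000
EQUITY = 7_125_000
CLOSING_COST_PCT = 0.02

gross_sale = YEAR5_NOI / EXIT_CAP
net_proceeds = gross_sale - DEBT - gross_sale * CLOSING_COST_PCT

assert round(gross_sale) == 30_545_455
assert abs(net_proceeds - 8_559_546) < 2   # matches within rounding

# Equity multiple, assuming a hypothetical $900,000 of cumulative
# operating distributions over the five-year hold.
cumulative_distributions = 900_000
equity_multiple = (cumulative_distributions + net_proceeds) / EQUITY
print(round(equity_multiple, 2))
```

If the model's Returns tab disagrees with this independent calculation by more than rounding, the error is in the model's links, not in the deal parameters.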

Run a Zero Test on the waterfall if applicable. Total LP distributions plus total GP distributions should equal total distributable cash. If the test shows a $1,000 discrepancy, you have a rounding error or a formula error. Find it and fix it. In our models, we insert a row at the bottom of the waterfall tab labeled "Balance Check" with the formula =Total_Cash - LP_Total - GP_Total. If this cell is not zero, the model is wrong.

Finally, stress-test the model. Change one input (say, rent growth from 3% to 4%) and confirm that IRR increases. Change exit cap rate from 5.5% to 6.0% and confirm that IRR decreases. These directional checks catch structural errors that do not show up as #REF! errors. If increasing rent growth decreases IRR, something is fundamentally wrong with the model logic.

This verification process takes 15-30 minutes for a standard pro forma. That is faster than building the model from scratch, which typically takes 2-3 hours. But it is not instantaneous. AI reduces modeling time by 80-90%, not by 100%. The time saved is in construction, not in review. You still need to understand what the model is calculating and whether it is correct. For broader guidance on this process, see our article on AI financial modeling in Excel.

The State of the Technology

As of early 2025, AI can build real estate pro forma models that are structurally complete and mostly accurate. The technology is production-ready for standard deals with well-defined parameters. It is not yet reliable for edge cases, highly customized structures, or models that require domain-specific judgment calls.

The progression over the past two years has been significant. In 2023, AI could suggest formulas but not generate files. In 2024, file generation became possible but error-prone. In 2025, error rates have dropped to the point where AI-generated models are usable after verification. The trend is toward higher reliability, more sophisticated logic handling, and better understanding of financial modeling conventions.

We expect continued improvement in three areas. First, better handling of ambiguity—AI will infer reasonable defaults when prompts are incomplete, rather than producing generic placeholders. Second, integration with data sources—models will pull market assumptions (cap rates, expense ratios, rent growth forecasts) from real-time databases rather than relying on prompt inputs. Third, automated verification—AI will run its own Zero Tests and flag potential errors before delivering the model, reducing the analyst's review burden.

The current state is sufficient for most institutional use cases, provided the user understands the verification requirement. AI is a modeling assistant, not a replacement for financial judgment. It accelerates the mechanical work of building the spreadsheet structure, allowing the analyst to focus on the logic, assumptions, and interpretation of results. That is a meaningful productivity gain, and it is available now. The question is not "can AI do this?"—it can. The question is "can you verify what AI produces?"—and that requires modeling literacy.

For analysts who understand pro forma logic, AI is a significant time saver. For those who do not, AI will produce output they cannot validate, which is worse than starting from scratch. The tool is powerful, but it assumes competence in the user. If you cannot spot an error in an escalation formula or an incorrect IRR calculation, AI-generated models are not yet safe for you to use. Learn the underlying financial modeling principles first, then use AI to accelerate execution. That sequence matters.
