How to Build Sensitivity Tables in Excel

Building sensitivity tables in Excel with AI: create one-way and two-way data tables that stress-test acquisition models, including variable selection, range specification, and output interpretation.

Building sensitivity tables in Excel with AI lets you systematically test how changes in key assumptions affect your model's outputs by creating data tables that display results across multiple variable values at once. This technique turns static financial models into dynamic stress-testing tools that reveal which assumptions drive your conclusions and where your deal logic is most vulnerable to market changes.

Working Example: Project "Clearwater"

To see sensitivity analysis in action, let's model a specific acquisition:

| Parameter | Value |
| --- | --- |
| Asset Type | Class B Office Building |
| Location | Tampa, FL |
| Purchase Price | $18,500,000 |
| Current Occupancy | 72% |
| Pro Forma Occupancy | 88% |
| Going-In Cap Rate | 7.2% |
| Exit Cap Rate | 6.8% |
| Hold Period | 5 years |
| Annual Rent Growth | 2.5% |
| Base Case Equity Multiple | 1.82x |
| Base Case IRR | 16.3% |

The sponsor believes this deal works at a 16.3% IRR, but has not tested what happens if rent growth slows to 1.5% or if the exit cap rate rises to 7.5%. Sensitivity tables reveal exactly how sensitive this 16.3% conclusion is to those assumptions.

What Sensitivity Analysis Shows

Sensitivity tables expose which variables control your model's outcomes. In the Clearwater deal, the base model shows a 16.3% IRR, but that number is meaningless without context about its stability. If a 50-basis-point change in exit cap rate drops the IRR to 11%, your investment thesis is fragile. If the same change only moves IRR to 15.1%, the deal has cushion.

Most financial models present single-point forecasts as if they were facts. "The project delivers a 16.3% IRR" sounds definitive, but it actually means "the project delivers 16.3% if all 47 assumptions happen to be correct simultaneously." Sensitivity analysis shifts the frame from "what is the return" to "under what conditions does the return hold."

When you build sensitivity tables in Excel with AI, prompts can generate the structure, but you must define which variables to test. The common mistake is testing everything. Exit cap rate, rent growth, occupancy, capital expenditures, lease-up velocity, tenant improvement costs, free rent periods, expense growth, refinance proceeds, and disposition timing all affect IRR. But testing all ten creates noise, not insight.

The practical approach: identify the two or three variables that have the largest impact on your investment decision. In most acquisition models, these are exit cap rate and a revenue driver like rent growth or stabilized occupancy. Those two variables typically explain 70% to 80% of the variance in IRR outcomes. Test those first. If you need more granularity later, add a third variable in a separate table.

In our Clearwater model, we will test exit cap rate and annual rent growth because the deal depends on both lease-up and a compressed exit cap. If either assumption fails, the 16.3% IRR collapses. The sensitivity table will quantify exactly how much.

This is the Verification meta-skill in action: you are not changing your model logic, you are testing whether the logic holds under different assumptions. Verification prevents you from presenting single-point forecasts as certainties when they are actually conditional on optimistic inputs.

Choosing Variables to Test

Start by asking: "If this variable is wrong, does the deal break?" In the Clearwater acquisition, the sponsor assumed a 6.8% exit cap rate based on comparable sales from Q2 2024. But cap rates are volatile. If the exit cap rate rises to 7.5% by Year 5, the sale price drops by roughly $1.1 million, which directly reduces equity proceeds and IRR.

Rent growth is the second variable. The model assumes 2.5% annual rent escalations, but Tampa office rents grew only 1.2% in 2023. If that trend continues, stabilized NOI will be lower than projected, which reduces both operating cash flow and exit value. These two variables—exit cap rate and rent growth—drive the deal outcome.

Here is how to select variables systematically:

First, run a quick manual test. Change one assumption by 20%, recalculate IRR, and note the difference. If a 20% change in a variable moves IRR by less than 100 basis points, that variable is not a priority. If the same change moves IRR by 300 basis points or more, that variable is a driver. Test the drivers.

Second, prioritize variables where you have the least certainty. If your lease comps show tight ranges for market rents, rent growth is relatively stable. If your exit cap rate depends on correctly predicting interest rate policy five years from now, exit cap rate is uncertain. Test the uncertain inputs.

Third, avoid testing variables that are mechanically linked. If you test both "NOI" and "exit cap rate," you are double-testing the same outcome because sale price is NOI divided by exit cap rate. Instead, test exit cap rate and the revenue component that drives NOI, like rent growth or stabilized occupancy. This isolates the two independent assumptions.
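
The first step, the quick manual test, is easy to script if you want to run it outside of Excel. The sketch below uses a deliberately simplified, unlevered stand-in for the Clearwater model (the $1.33M year-one NOI, five-year hold, and bisection IRR solver are illustrative assumptions, and the resulting IRRs will not match the 16.3% levered base case); the point is the ranking pattern, not the numbers.

```python
# Minimal sketch of the "shock one input by 20%" screen, using a toy
# unlevered stand-in for the Clearwater model. IRRs here will not match the
# 16.3% levered base case; the ranking pattern is what matters.

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Solve NPV(rate) = 0 by bisection (assumes one sign change in NPV)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def project_irr(exit_cap=0.068, rent_growth=0.025, occupancy=0.88,
                purchase_price=18_500_000, year1_noi=1_330_000, hold_years=5):
    """Toy acquisition model: NOI grows with rent, exit value = forward NOI / exit cap."""
    noi = [year1_noi * (occupancy / 0.88) * (1 + rent_growth) ** t
           for t in range(hold_years)]
    exit_value = noi[-1] * (1 + rent_growth) / exit_cap
    cash_flows = [-purchase_price] + noi[:-1] + [noi[-1] + exit_value]
    return irr(cash_flows)

base = {"exit_cap": 0.068, "rent_growth": 0.025, "occupancy": 0.88}
base_irr = project_irr(**base)

# Shock each candidate input down by 20% and measure the IRR move in basis
# points. Direction does not matter for this screen, only magnitude.
impacts = {}
for name, value in base.items():
    shocked_irr = project_irr(**{**base, name: value * 0.8})
    impacts[name] = abs(shocked_irr - base_irr) * 10_000

for name, bps in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} moves IRR by {bps:,.0f} bps under a 20% shock")
```

The inputs that land at the top of this ranking are your drivers; the ones that barely move IRR can stay out of the sensitivity tables.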

In the Clearwater model, we will build two sensitivity tables:

  1. One-way table: Exit cap rate (ranging from 6.3% to 7.8%)
  2. Two-way table: Exit cap rate (6.3% to 7.8%) and annual rent growth (1.0% to 4.0%)

This structure tests the primary risk (exit cap rate) in isolation, then tests the interaction between exit cap rate and rent growth. You can see both the individual effect and the combined effect.

When working with AI to build these tables, specify the exact variable names and cell references from your model. If your exit cap rate is in cell Assumptions!C12, state that in your prompt. AI cannot infer which cell contains which assumption. Explicit reference prevents AI from testing the wrong input or linking the table to an empty cell.

Verification (Stress Testing)

Sensitivity tables are not just analytical tools—they are verification mechanisms. When you build a financial model, you make dozens of small logic decisions: which expenses escalate, which leases renew, how capital expenditures are timed. These decisions are invisible in a single-point forecast, but they become visible under stress.

For example, if you test exit cap rate in the Clearwater model and discover that IRR becomes negative at a 7.8% cap rate, you should investigate why. A negative IRR usually signals that the model is returning less equity at exit than was invested. That could mean the sale price is too low, or it could mean your Year 5 NOI projection is wrong. Open the model, trace the calculation, and confirm that all logic is correct before presenting the sensitivity results.

This is why we treat sensitivity analysis as part of Verification, not as a separate output exercise. The sensitivity table often reveals errors that were masked in the base case. When you test ranges instead of single points, formula mistakes become obvious. If your IRR jumps from 16% to 45% when you change rent growth from 2.5% to 3.0%, something is broken. No 50-basis-point change in rent growth produces a 2,900-basis-point change in IRR unless your model contains circular logic or a hardcoded override.

Run these checks after building any sensitivity table (a scripted version is sketched after the list):

  1. Monotonicity check: As you increase a favorable variable (like rent growth), does the output improve monotonically, or does it jump around? If IRR is 14% at 2.0% rent growth, 18% at 2.5% rent growth, and 12% at 3.0% rent growth, your model has an error. Outputs should move consistently in the expected direction.
  2. Magnitude check: Do the changes in output match the scale of changes in input? If you double rent growth from 2% to 4%, does IRR roughly double, or does it increase by 5,000 basis points? Large jumps suggest that a formula is multiplying instead of adding, or that a cell reference is incorrect.
  3. Boundary check: Test extreme values. Set exit cap rate to 15%. Set rent growth to -5%. The model should return logical results (likely a very low or negative IRR) without producing errors, circular references, or division-by-zero crashes. If the model breaks under extreme inputs, your formulas are not robust.
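
The first two checks can be run directly against the finished table. The sketch below applies them to the one-way exit cap table from this guide; the 500-basis-point jump threshold is an illustrative cutoff, not a fixed rule, and the boundary check has to be rerun inside the model itself.

```python
# Monotonicity and magnitude checks run against the finished one-way table
# (exit cap rate, IRR) from this guide. The 500 bps jump threshold is an
# illustrative cutoff, not a rule.
rows = [(0.063, 0.204), (0.065, 0.186), (0.068, 0.163), (0.070, 0.147),
        (0.073, 0.128), (0.075, 0.115), (0.078, 0.098)]

irrs = [out for _, out in rows]

# 1. Monotonicity: a higher exit cap rate should never increase IRR.
if any(b > a for a, b in zip(irrs, irrs[1:])):
    print("Monotonicity check failed: IRR rises somewhere as the exit cap rises")

# 2. Magnitude: flag jumps larger than ~500 bps between adjacent scenarios,
#    which usually signal a broken formula rather than real sensitivity.
for (cap_a, irr_a), (cap_b, irr_b) in zip(rows, rows[1:]):
    jump_bps = abs(irr_b - irr_a) * 10_000
    if jump_bps > 500:
        print(f"Investigate the {jump_bps:,.0f} bps jump between {cap_a:.1%} and {cap_b:.1%}")

# 3. Boundary: rerun the model itself at extremes (15% exit cap, -5% rent
#    growth) and confirm it returns a low or negative IRR rather than an error.
```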

After running these checks, you can trust the sensitivity table. If the table shows that IRR ranges from 12.1% to 20.4% across realistic exit cap rate assumptions, you know the deal has moderate sensitivity—neither bulletproof nor fragile. That is actionable information for an investment committee.

For more on stress testing logic, see our guide on [How to Verify Excel Models with AI].

One-Way vs Two-Way Tables

One-way sensitivity tables test a single variable. Two-way tables test two variables simultaneously. Both are useful, but they answer different questions.

A one-way table isolates a single assumption. In the Clearwater model, we can build a one-way table that shows IRR across seven exit cap rate scenarios: 6.3%, 6.5%, 6.8%, 7.0%, 7.3%, 7.5%, and 7.8%. This table has one input column (exit cap rate) and one output column (IRR). It is simple to read and easy to explain to an investment committee.

Here is what the one-way table structure looks like in Excel:

| Exit Cap Rate | IRR |
| --- | --- |
| 6.3% | 20.4% |
| 6.5% | 18.6% |
| 6.8% (Base) | 16.3% |
| 7.0% | 14.7% |
| 7.3% | 12.8% |
| 7.5% | 11.5% |
| 7.8% | 9.8% |

This table shows that a 50-basis-point increase in exit cap rate from the base case (6.8% to 7.3%) reduces IRR by 350 basis points. That is meaningful sensitivity. If the market shifts modestly against the deal, returns drop below the 15% hurdle.

A two-way table adds a second variable. In the Clearwater model, we can test exit cap rate on one axis and annual rent growth on the other axis. This creates a grid where each cell shows the IRR for a specific combination of assumptions. Two-way tables reveal interactions: maybe the deal is resilient to higher cap rates if rent growth is strong, but fragile if both variables move against you simultaneously.

Here is the two-way table structure:

| Exit Cap / Rent Growth | 1.0% | 1.5% | 2.0% | 2.5% | 3.0% | 4.0% |
| --- | --- | --- | --- | --- | --- | --- |
| 6.3% | 16.8% | 18.2% | 19.4% | 20.4% | 21.3% | 23.0% |
| 6.5% | 15.2% | 16.5% | 17.7% | 18.6% | 19.5% | 21.1% |
| 6.8% | 13.1% | 14.3% | 15.4% | 16.3% | 17.2% | 18.7% |
| 7.0% | 11.7% | 12.9% | 13.9% | 14.7% | 15.6% | 17.0% |
| 7.3% | 9.8% | 10.9% | 11.9% | 12.8% | 13.6% | 14.9% |
| 7.5% | 8.6% | 9.7% | 10.7% | 11.5% | 12.3% | 13.6% |
| 7.8% | 7.0% | 8.1% | 9.0% | 9.8% | 10.6% | 11.8% |

The two-way table shows that if exit cap rate rises to 7.3% and rent growth falls to 1.5%, IRR drops to 10.9%—well below the target return. But if rent growth exceeds expectations at 3.0% even with a 7.3% exit cap, IRR holds at 13.6%. This is useful information: the deal can survive one unfavorable variable, but not two.
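
To see the mechanics of both grids outside Excel, the sketch below generates a one-way and a two-way sweep in code, reusing the same toy unlevered model as the variable-selection sketch earlier. Its IRRs will not match the levered Clearwater figures above; the structure of the sweep is what the example illustrates.

```python
# Generate one-way and two-way sensitivity grids with the same toy unlevered
# model used in the variable-selection sketch (IRRs will not match the levered
# Clearwater figures above).

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Solve NPV(rate) = 0 by bisection (assumes one sign change in NPV)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def project_irr(exit_cap, rent_growth, purchase_price=18_500_000,
                year1_noi=1_330_000, hold_years=5):
    noi = [year1_noi * (1 + rent_growth) ** t for t in range(hold_years)]
    exit_value = noi[-1] * (1 + rent_growth) / exit_cap
    return irr([-purchase_price] + noi[:-1] + [noi[-1] + exit_value])

cap_rates = [0.063, 0.065, 0.068, 0.070, 0.073, 0.075, 0.078]
rent_growths = [0.010, 0.015, 0.020, 0.025, 0.030, 0.040]

# One-way: hold rent growth at the 2.5% base case and vary the exit cap.
print("Exit Cap     IRR")
for cap in cap_rates:
    print(f"{cap:8.1%} {project_irr(cap, 0.025):7.1%}")

# Two-way: every combination of exit cap (rows) and rent growth (columns).
print("\nCap \\ Growth" + "".join(f"{g:>8.1%}" for g in rent_growths))
for cap in cap_rates:
    row = "".join(f"{project_irr(cap, g):>8.1%}" for g in rent_growths)
    print(f"{cap:>12.1%}{row}")
```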

When prompting AI to build these tables, specify the structure explicitly. For a one-way table, state: "Create a data table with exit cap rate in cells A2:A8 ranging from 6.3% to 7.8%, and IRR output in column B, referencing the IRR cell in the summary tab." For a two-way table, state: "Create a data table with exit cap rate in rows (6.3% to 7.8%) and rent growth in columns (1.0% to 4.0%), with IRR output in the grid, referencing the IRR cell."

Excel's native Data Table feature (under Data > What-If Analysis > Data Table) automates this process, but AI can help structure the input ranges and link them to the correct output cell. The key is clarity: tell the AI exactly which cells contain the variables and which cell contains the formula to test.
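
If you want to script the layout step itself, something like the openpyxl sketch below can write the input column and the output formula into your workbook. The filename, sheet names, and cell addresses here (model.xlsx, Summary!D10, Assumptions!C12) are placeholders for your own model, and the Data Table still has to be applied by hand in Excel, because openpyxl only writes cell values and formulas.

```python
# Lay out the inputs and output formula for a one-way Data Table.
# All file, sheet, and cell references below are placeholders.
from openpyxl import load_workbook

wb = load_workbook("model.xlsx")
ws = wb.create_sheet("Sensitivity")

# Column of exit cap rate inputs in A3:A9.
cap_rates = [0.063, 0.065, 0.068, 0.070, 0.073, 0.075, 0.078]
for i, cap in enumerate(cap_rates, start=3):
    cell = ws.cell(row=i, column=1, value=cap)
    cell.number_format = "0.0%"

# The output formula sits one row above and one column to the right of the
# inputs, and points at the model's IRR cell (placeholder: Summary!D10).
ws["B2"] = "=Summary!D10"
ws["B2"].number_format = "0.0%"

wb.save("model.xlsx")
# In Excel: select A2:B9, then Data > What-If Analysis > Data Table, and set
# the column input cell to the exit cap assumption (placeholder: Assumptions!C12).
```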

Specifying Ranges and Increments

The ranges you test determine whether your sensitivity analysis is useful or misleading. If you test exit cap rates from 6.3% to 6.5% in a model with a 6.8% base case, you are only testing upside scenarios. If you test from 6.0% to 10.0%, you are testing scenarios that are not realistic. The goal is to define ranges that reflect plausible market outcomes, not to explore every mathematical possibility.

In the Clearwater model, the base case exit cap rate is 6.8%. This was based on comparable sales from mid-2024, when Tampa office cap rates ranged from 6.5% to 7.2%. To stress-test the assumption, we expand the range by roughly 50 basis points in each direction: from 6.3% (optimistic, assumes further cap rate compression) to 7.8% (pessimistic, assumes rising interest rates push cap rates higher). This 150-basis-point range captures realistic scenarios without venturing into extremes like 5% or 12% cap rates.

The increment matters as well. If you test cap rates at 6.3%, 6.8%, and 7.8%, you only get three data points. That is not enough granularity to understand the sensitivity curve. If you test every 10 basis points (6.3%, 6.4%, 6.5%, etc.), you get 16 data points, which is excessive for a presentation table. A 25- to 50-basis-point increment is usually optimal. In our one-way table, we used increments of 20 to 30 basis points: 6.3%, 6.5%, 6.8%, 7.0%, 7.3%, 7.5%, 7.8%. This gives seven scenarios, which is enough to see the trend without overwhelming the reader.

For rent growth, the base case is 2.5%. Historical Tampa office rent growth has ranged from 0.8% to 3.6% over the past ten years, so we test from 1.0% (below-average growth) to 4.0% (above-average growth). The increment is 50 basis points up to 3.0%, with a single 100-basis-point step to 4.0%, which keeps the table readable while covering the realistic range.
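
A small helper can generate these ranges and keep the increments honest before you type them into the model. The sketch below builds an even 25-basis-point grid for exit cap and an even 50-basis-point grid for rent growth; note that strictly even grids include a few points (6.55%, 7.05%, 7.55%, 3.5%) that the rounded presentation tables above skip.

```python
# Build evenly spaced test ranges, stepped in basis points, with rounding to
# avoid floating-point drift in the labels.
def bps_range(start, stop, step_bps):
    """Inclusive range from start to stop (as decimals), stepped in basis points."""
    step = step_bps / 10_000
    count = int(round((stop - start) / step)) + 1
    return [round(start + i * step, 6) for i in range(count)]

exit_caps = bps_range(0.063, 0.078, 25)     # 6.30%, 6.55%, ..., 7.80%
rent_growth = bps_range(0.010, 0.040, 50)   # 1.0%, 1.5%, ..., 4.0%

print(", ".join(f"{x:.2%}" for x in exit_caps))
print(", ".join(f"{x:.1%}" for x in rent_growth))
```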

When prompting AI to specify ranges, provide the reasoning: "Test exit cap rate from 6.3% to 7.8% in 25-basis-point increments, because the base case is 6.8% and comparable sales range from 6.5% to 7.2%. Extend the range by 50 basis points in each direction to capture optimistic and pessimistic scenarios." This prevents AI from defaulting to arbitrary ranges like 1% to 10%.

Also specify how you want the ranges formatted. If your model uses percentages formatted as 6.8% in cells, state that explicitly. If your model uses decimals formatted as 0.068, tell the AI to match that format. Inconsistent formatting causes Excel data tables to fail because the input cell expects one format and receives another.

Finally, test boundary conditions. If you are testing rent growth, include a zero or negative growth scenario to see what happens if rents stagnate or decline. If the model crashes or returns nonsensical results (like a 200% IRR when rent growth is -2%), you have a formula error that needs fixing before you rely on the sensitivity table.

For a deeper look at how to structure model inputs, see our guide on [How to Iterate on Excel Models].

Interpreting Output

Once you build the sensitivity table, you must interpret it correctly. A common mistake is treating the table as a probability distribution. The table shows you what happens under different assumptions, but it does not tell you how likely those assumptions are. If your two-way table shows IRR ranging from 9.8% to 23.0%, that does not mean the expected IRR is the midpoint (16.4%). It means the IRR could be anywhere in that range depending on which assumptions prove correct.

Start by identifying the "no-regret zone": the range of assumptions where the deal still meets your return threshold. In the Clearwater model, the investment committee requires a 15% IRR. Looking at the two-way table, the deal clears 15% at every rent growth rate tested when the exit cap rate is 6.5% or lower, needs rent growth of at least 2.0% at the 6.8% base-case cap, and needs at least 3.0% at a 7.0% cap. If both variables move unfavorably (exit cap rate rises to 7.3% and rent growth falls to 1.5%), IRR drops to 10.9%, which fails the hurdle.

This tells you where the deal is robust and where it is fragile. The deal can tolerate moderate cap rate expansion if rent growth is healthy. It can tolerate weak rent growth if cap rates compress. But it cannot tolerate both risks simultaneously. That is the key insight.
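
The same no-regret reading can be pulled out of the grid programmatically. The sketch below scans the two-way table values from this guide and reports, for each exit cap scenario, the minimum rent growth that still clears the 15% hurdle; it assumes IRR rises monotonically with rent growth, which the verification checks earlier already confirmed.

```python
# Scan the finished two-way grid for combinations that clear the 15% hurdle.
# Grid values are the Clearwater two-way table from this guide.
hurdle = 0.15
rent_growths = [0.010, 0.015, 0.020, 0.025, 0.030, 0.040]
grid = {
    0.063: [0.168, 0.182, 0.194, 0.204, 0.213, 0.230],
    0.065: [0.152, 0.165, 0.177, 0.186, 0.195, 0.211],
    0.068: [0.131, 0.143, 0.154, 0.163, 0.172, 0.187],
    0.070: [0.117, 0.129, 0.139, 0.147, 0.156, 0.170],
    0.073: [0.098, 0.109, 0.119, 0.128, 0.136, 0.149],
    0.075: [0.086, 0.097, 0.107, 0.115, 0.123, 0.136],
    0.078: [0.070, 0.081, 0.090, 0.098, 0.106, 0.118],
}

for cap, irrs in grid.items():
    passing = [g for g, value in zip(rent_growths, irrs) if value >= hurdle]
    if passing:
        print(f"Exit cap {cap:.1%}: clears the hurdle at rent growth >= {min(passing):.1%}")
    else:
        print(f"Exit cap {cap:.1%}: misses the hurdle at every rent growth tested")
```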

Next, look for asymmetry. In the one-way exit cap rate table, a 50-basis-point improvement in cap rate (from 6.8% to 6.3%) adds 410 basis points to IRR, but a 50-basis-point deterioration (from 6.8% to 7.3%) only reduces IRR by 350 basis points. This asymmetry exists because sale price equals NOI divided by cap rate, so an equal move down in the cap rate lifts exit value more than a move up reduces it, and the deal's leverage amplifies that difference in equity returns. The sensitivity table makes this dynamic visible.

Also check for threshold effects. If IRR drops sharply between two adjacent scenarios, investigate why. In some models, IRR collapses at a specific cap rate because the exit proceeds are no longer sufficient to repay the loan, triggering a default scenario. In others, IRR drops sharply because a lease renewal assumption flips from "renews" to "vacates," which cascades through the cash flow. These threshold effects are not obvious in a static base case, but they are immediately visible in a sensitivity table.

Finally, present the results with context. Do not hand an investment committee a table showing IRRs from 9.8% to 23.0% and say "here is the sensitivity analysis." Instead, say: "The base case IRR is 16.3%. The deal meets our 15% hurdle at exit cap rates of 6.5% or below regardless of rent growth, and at the 6.8% base-case cap as long as rent growth reaches at least 2.0%. If both assumptions move against us (exit cap rate rises to 7.3% and rent growth falls to 1.5%), IRR drops to 10.9%, below our threshold. The primary risk is exit cap rate, which drives 70% of the IRR variance."

That statement gives the investment committee actionable information: the deal is moderately sensitive, the primary risk is exit cap rate, and there is cushion as long as one of the two key assumptions holds.

For more on how to structure model iteration and testing workflows, see our guide on [How to Iterate on Excel Models].

Next Steps

Sensitivity tables are most useful when integrated into a repeatable workflow. After building the Clearwater tables, the next step is to run the same analysis on every acquisition model before presenting to the investment committee. This ensures that all deals are evaluated using consistent stress-testing standards.

To extend this analysis further, consider linking sensitivity tables to scenario planning. Instead of testing individual variables in isolation, build three full scenarios—Base, Downside, and Upside—and use sensitivity tables to quantify the range within each scenario. For example, the Downside scenario might assume exit cap rate of 7.3%, rent growth of 1.5%, and 80% stabilized occupancy. The sensitivity table shows how IRR changes if any one of those assumptions improves or deteriorates further.
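
One lightweight way to organize that pairing is to keep each scenario's assumptions in a small structure and derive local sensitivity ranges from them. In the sketch below, the Downside values follow the text above, the Upside values are purely illustrative, and the plus-or-minus 50-basis-point local range is an arbitrary choice you would tune to your own market view.

```python
# Pair each scenario with its own local sensitivity range. Downside values
# follow the text above; Upside values are illustrative placeholders.
scenarios = {
    "Base":     {"exit_cap": 0.068, "rent_growth": 0.025, "occupancy": 0.88},
    "Downside": {"exit_cap": 0.073, "rent_growth": 0.015, "occupancy": 0.80},
    "Upside":   {"exit_cap": 0.065, "rent_growth": 0.030, "occupancy": 0.92},
}

# For each scenario, test exit caps within +/- 50 bps of that scenario's own
# assumption, in 25 bps steps, so the table shows how far the scenario can
# deteriorate or improve before the conclusion changes.
for name, assumptions in scenarios.items():
    caps = [round(assumptions["exit_cap"] + bps / 10_000, 5)
            for bps in (-50, -25, 0, 25, 50)]
    print(f"{name:9s} exit caps to test: " + ", ".join(f"{c:.2%}" for c in caps))
```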

For deal-specific guidance on acquisition modeling, see our guide on [How to Build Acquisition Models with AI].
