Risk & Response by Arborlook Insights

Methodology

How department pages are built — from spatial analysis to color scales to peer matching.

Department Coverage

Last updated: February 2026

We generate pages for all departments in the NERIS Public dataset that have a valid boundary polygon — approximately 22,000 departments across all 50 states and the District of Columbia. Departments in NERIS without a recorded boundary (roughly 8,000) are excluded because all analysis depends on geography: tract assignment, population calculation, spatial maps, and infrastructure counts all require a department boundary to exist.

Each department's boundary, name, department type, and station locations come directly from NERIS Public and are not modified.
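As a minimal sketch, the coverage rule reduces to a geometry filter over the NERIS export. The file name, library choice, and column handling here are illustrative, not taken from the pipeline:

    import geopandas as gpd

    # Hypothetical NERIS Public export: one row per department.
    departments = gpd.read_file("neris_departments.gpkg")

    # Keep only departments with a usable boundary polygon; tract
    # assignment, maps, and facility counts all require geography.
    has_boundary = departments.geometry.notna() & ~departments.geometry.is_empty
    covered = departments[has_boundary & departments.geometry.is_valid]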

Census Tracts as the Unit of Analysis

All sub-department analysis uses 2020 Census tracts as the geographic unit. Census tracts are relatively stable geographic areas designed to be fairly homogeneous in population characteristics and economic status, with populations typically between 1,200 and 8,000 people. They are also the primary geography at which American Community Survey (ACS) data are released, and the ACS is the source for all demographic metrics on these pages.

Using tracts (rather than ZIP codes, counties, or block groups) provides a balance of geographic precision and data reliability. ACS 5-year estimates at the tract level have acceptable margins of error for the metrics we report. Block groups would be more granular but have much higher uncertainty; counties are too coarse to reveal within-jurisdiction variation.

Tract-to-Department Assignment

Census tract boundaries do not align with fire department jurisdictional boundaries. A single tract may overlap two or more departments. To assign tracts to departments, we use a largest overlap rule: each tract is assigned to the department whose boundary overlaps it the most, measured by area in square meters.

This means a tract on a jurisdictional boundary may be assigned entirely to one department even though a small portion falls in another. The tradeoff is intentional: this approach avoids double-counting the same population across departments and produces clean, non-overlapping per-department metrics.

Example: A census tract is 70% within Department A's boundary and 30% within Department B's. The tract is assigned entirely to Department A. Its population, demographics, and hazard scores contribute to Department A's aggregated metrics.
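A sketch of the largest-overlap rule using geopandas; the column names (GEOID, dept_id) and the choice of equal-area CRS are assumptions, not specified by the pipeline:

    import geopandas as gpd

    # Project both layers to an equal-area CRS so .area is in square meters.
    tracts = gpd.read_file("tracts_2020.gpkg").to_crs("EPSG:5070")
    departments = gpd.read_file("neris_departments.gpkg").to_crs("EPSG:5070")

    # Intersect each tract with every department boundary it overlaps.
    pieces = gpd.overlay(tracts, departments, how="intersection")
    pieces["overlap_m2"] = pieces.geometry.area

    # Assign each tract to the department with the largest overlap area.
    winners = pieces.sort_values("overlap_m2").groupby("GEOID").tail(1)
    assignment = winners[["GEOID", "dept_id"]]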

Departments with zero assigned tracts (typically covering very small or largely unpopulated areas) are flagged and excluded from metrics that require population data.

Natural Hazard Risk Scores

Hazard scores come directly from FEMA's National Risk Index (NRI), version 1.20. The NRI provides scores for 18 hazard types at the census tract level. Scores range from 0 to 100 and represent relative risk nationally — a score of 80 means the tract is in the top 20% of risk for that hazard, nationally.

National Fixed Color Scale

Hazard map colors use a fixed national scale so they mean the same thing on every department page:

Very Low     0–20
Low          20–40
Medium       40–60
High         60–80
Very High    80–100
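In code, the fixed scale is a simple threshold lookup. The labels and bin edges come from the scale above; treating a score that lands exactly on an edge as the lower bucket is our assumption:

    # Fixed national scale: the same thresholds on every department page.
    NATIONAL_SCALE = [(20, "Very Low"), (40, "Low"), (60, "Medium"),
                      (80, "High"), (100, "Very High")]

    def national_bucket(score: float) -> str:
        for upper, label in NATIONAL_SCALE:
            if score <= upper:
                return label
        return "Very High"  # scores are capped at 100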

This allows meaningful national comparison: when a department's earthquake map shows a red tract, that tract is in the top 20% of earthquake risk nationally, regardless of where the department is located.

Department-Level Score

The single hazard score displayed for a department is a population-weighted average across all assigned tracts: each tract's score is multiplied by its population, summed, then divided by total department population. This weights high-population tracts more heavily than low-population tracts.
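In code, the department-level score is a few lines; the (score, population) pair structure is illustrative:

    def department_score(tract_scores):
        """Population-weighted average over (nri_score, population) pairs."""
        total_pop = sum(pop for _, pop in tract_scores)
        if total_pop == 0:
            return None  # no populated tracts: excluded from scoring
        return sum(score * pop for score, pop in tract_scores) / total_pop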

18 Hazard Types

Hazard              NRI Code
Avalanche           AVLN
Coastal Flooding    CFLD
Cold Wave           CWAV
Drought             DRGT
Earthquake          ERQK
Hail                HAIL
Heat Wave           HWAV
Hurricane           HRCN
Ice Storm           ISTM
Landslide           LNDS
Lightning           LTNG
Riverine Flooding   RFLD
Strong Wind         SWND
Tornado             TRND
Tsunami             TSUN
Volcanic Activity   VLCN
Wildfire            WFIR
Winter Weather      WNTW

Demographic & Risk Maps

For demographic, fire risk, and EMS demand maps, we take a different approach than for the hazard maps: percentile ranks within the department's own tracts. This answers a different question than the national scale does: not "how do we compare nationally?" but "which of my tracts should I prioritize?"

Quintile Breakpoints

For each metric and each department, we calculate the 20th, 40th, 60th, and 80th percentile values across that department's tracts. These become the color breakpoints:

Bottom 20%    Lowest need
20–40th       Below median
40–60th       Median
60–80th       Above median
Top 20%       Highest need

This means the color legend is unique to each department and each metric. A department where every tract has 5–10% mobile home density will still show variation — the tracts at 10% show orange or red, the tracts at 5% show green. This helps chiefs identify which tracts deserve attention first given their specific conditions.
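A sketch of the per-department breakpoints with NumPy; the bucket numbering (0 = lowest need, 4 = highest) is illustrative:

    import numpy as np

    def quintile_breakpoints(values):
        # 20th/40th/60th/80th percentiles of this department's tract values.
        return np.percentile(values, [20, 40, 60, 80])

    def bucket(value, breaks):
        # 0 = lowest need ... 4 = highest need for this department.
        return int(np.searchsorted(breaks, value, side="right"))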

Zero-Inflation Handling

Some metrics have many tracts with a zero value; for example, a dense urban department might have 80% of tracts with zero wood/fuel oil heating. In that case a naive percentile approach can push zero-value tracts into the amber or red buckets: with 80% of values tied at zero, the 20th, 40th, and 60th percentile breakpoints all equal zero, so a zero-value tract can rank above them even though zero is objectively low risk.

We apply a zero-inflation correction: if 60% or more of a department's tracts have a zero value for a metric, all zero-value tracts are colored green regardless of their rank. Only tracts with a value above zero use the percentile scale. This prevents misleading coloring for metrics that genuinely don't apply to a jurisdiction.
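A sketch of the correction, assuming the percentile scale is recomputed over only the nonzero tracts (one reading of the rule above); the 60% threshold is from the text:

    import numpy as np

    def color_buckets(values, zero_share=0.60):
        values = np.asarray(values, dtype=float)
        zero = values == 0
        if zero.mean() >= zero_share:
            buckets = np.zeros(len(values), dtype=int)  # zeros stay green
            if (~zero).any():
                nonzero = values[~zero]
                breaks = np.percentile(nonzero, [20, 40, 60, 80])
                buckets[~zero] = np.searchsorted(breaks, nonzero, side="right")
            return buckets
        breaks = np.percentile(values, [20, 40, 60, 80])
        return np.searchsorted(breaks, values, side="right")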

Metrics Included

Category       Metric                                 Source
Fire Risk      % housing built before 1980            ACS
Fire Risk      % housing built before 1960            ACS
Fire Risk      % using wood, fuel oil, or coal heat   ACS
Fire Risk      % vacant housing units                 ACS
Fire Risk      % mobile homes                         ACS
Fire Risk      % renter-occupied                      ACS
EMS Demand     % population age 65+                   ACS
EMS Demand     % population age 85+                   ACS
EMS Demand     % with a disability                    ACS
EMS Demand     % uninsured                            ACS
EMS Demand     % households with no vehicle           ACS
Demographics   % below poverty line                   ACS
Demographics   % with limited English proficiency     ACS
Demographics   Median household income                ACS

Peer Matching

Each department is matched to up to 15 peer departments from the full NERIS Public set. The goal is to identify departments that face similar community conditions — not just similar size — so that comparisons are meaningful.

Step 1: Hard Filters

Candidates must pass all four hard filters to be considered as a peer. Departments that don't match on these dimensions aren't meaningfully comparable:

  • Department type: must be the same (career, combination, or volunteer, from NERIS).
  • Community class: must be the same (Urban: ≥1,000 people/sq mi; Suburban: 500–1,000; Rural: <500). Based on NFPA 1720 demand zone thresholds.
  • Census division: must be in the same one of the nine U.S. Census divisions (e.g., New England, East North Central, Mountain). This keeps peers geographically relevant; a rural New England department and a rural Mountain department face very different conditions.
  • Population: must be within ±50% of the target department's population.
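Expressed as a single predicate, the hard filters look like the sketch below; the field names are illustrative:

    def passes_hard_filters(target, candidate):
        """True if `candidate` survives all four hard filters for `target`."""
        return (
            candidate["dept_type"] == target["dept_type"]
            and candidate["community_class"] == target["community_class"]
            and candidate["census_division"] == target["census_division"]
            and 0.5 * target["population"]
                <= candidate["population"]
                <= 1.5 * target["population"]
        )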

Step 2: Similarity Scoring

Among candidates that pass the hard filters, we compute a weighted similarity score. Each dimension is normalized 0–1 before weighting, so no single variable dominates due to scale differences:

Dimension             Weight   How It's Measured
Population served     25%      Log-scaled total population (the log compresses differences at large populations)
Population density    20%      Log-scaled people per square mile of service area
Hazard risk profile   15%      Population-weighted NRI RISK_SCORE
Elderly population    10%      % of population age 65+
Poverty rate          10%      % of population below the federal poverty line
Older housing stock   10%      % of housing units built before 1970

The 15 candidates with the lowest total distance (highest similarity) become the peer group. Departments with fewer than 15 candidates passing the hard filters will have fewer peers displayed.
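A sketch of the scoring, using the weights from the table above. The min-max normalization over the candidate pool and the field names are our assumptions, and the log-scaled population and density values are assumed to be precomputed:

    WEIGHTS = {
        "log_population": 0.25, "log_density": 0.20, "risk_score": 0.15,
        "pct_65_plus": 0.10, "pct_poverty": 0.10, "pct_pre_1970": 0.10,
    }

    def distance(target, candidate, spans):
        # spans[dim] = (min, max) over the candidate pool, for 0-1 normalization.
        total = 0.0
        for dim, weight in WEIGHTS.items():
            lo, hi = spans[dim]
            scale = (hi - lo) or 1.0
            total += weight * abs(target[dim] - candidate[dim]) / scale
        return total

    # Peer group: the 15 candidates with the smallest distance.
    # peers = sorted(candidates, key=lambda c: distance(target, c, spans))[:15]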

Peer comparisons displayed on department pages use department-level averages, not tract-level data. Each peer metric is the population-weighted average across all tracts assigned to that peer department.

Disaster Declarations

Federal disaster declarations come from OpenFEMA. Declarations are matched to departments by county FIPS code — a department is associated with all declarations for the county or counties its boundary overlaps.

Statewide declarations (where the county code is "000") are excluded, as these cover entire states and don't indicate localized impact. All remaining declarations from 1959 through the current year are included in the count; the most recent 10 years are highlighted on the department page.
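As a sketch, the matching is a county-FIPS lookup, assuming OpenFEMA's fipsStateCode and fipsCountyCode fields from the DisasterDeclarationsSummaries dataset; the surrounding structure is illustrative:

    def department_declarations(dept_county_fips, declarations):
        """Keep declarations whose county FIPS matches the department's counties."""
        matched = []
        for decl in declarations:
            if decl["fipsCountyCode"] == "000":
                continue  # statewide declaration: excluded
            fips = decl["fipsStateCode"] + decl["fipsCountyCode"]
            if fips in dept_county_fips:
                matched.append(decl)
        return matched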

Because matching is county-based, a declaration appears on a department's page if the declaration covers the county — not necessarily the specific jurisdiction. For most departments this is a reasonable approximation; for very large counties with many departments, the same declarations will appear across all departments in that county.

Critical Infrastructure

Infrastructure counts come from the Homeland Infrastructure Foundation-Level Data (HIFLD) dataset, maintained by DHS/CISA. We include five facility types:

  • Hospitals
  • Nursing homes
  • Public schools
  • Private schools
  • Child care centers

Each facility is spatially joined to department boundaries. A facility is assigned to the department whose boundary contains it. Approximately 85% of facilities match to a department; the remainder are in areas with no corresponding NERIS boundary (very rural, unincorporated, or jurisdictional gaps) and are excluded.
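A sketch of the join with geopandas; the file names and the dept_id column are illustrative:

    import geopandas as gpd

    facilities = gpd.read_file("hifld_hospitals.gpkg")
    departments = gpd.read_file("neris_departments.gpkg").to_crs(facilities.crs)

    # Inner join keeps a facility only if a department boundary contains it;
    # the unmatched ~15% simply drop out.
    joined = gpd.sjoin(facilities, departments[["dept_id", "geometry"]],
                       how="inner", predicate="within")
    counts = joined.groupby("dept_id").size()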

Counts shown on department pages are the number of each facility type within the department's boundary as of the HIFLD dataset vintage. HIFLD data is periodically updated by DHS; we refresh with new releases annually.

Aggregation: How Tract Metrics Become Department Metrics

For percentage metrics (e.g., % poverty, % uninsured), we aggregate by summing raw numerators and denominators across all assigned tracts, then dividing:

dept_pct_poverty = sum(tract_pov_below) / sum(tract_pov_universe)

This is equivalent to asking "what percentage of the people in this department's service area are below the poverty line?", which is a straightforward interpretation. We do not average the tract-level percentages themselves, which can produce misleading results when tract populations vary widely.

For continuous metrics (e.g., median household income, NRI scores), we use population-weighted averages: multiply each tract's value by its population, sum, then divide by total department population.
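Both rules in one place, as a sketch over a per-tract DataFrame with illustrative column names:

    import pandas as pd

    def aggregate(tracts: pd.DataFrame) -> dict:
        pop = tracts["population"]
        return {
            # Percentage metric: sum numerators and denominators, then divide.
            "pct_poverty": tracts["pov_below"].sum() / tracts["pov_universe"].sum(),
            # Continuous metric: population-weighted average.
            "median_income": (tracts["median_income"] * pop).sum() / pop.sum(),
        }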