What is Downscaling?

Bridging the gap between the global scale of climate models and the local scale of decision-making.

The Scale Mismatch

Global Climate Models (GCMs) are the primary tools used to simulate the response of the Earth's climate system to increasing greenhouse gases. However, because simulating global atmosphere, ocean, and land-surface interactions is computationally expensive, these simulations typically operate on a coarse horizontal grid, often with cells ranging from 100 km to 250 km on a side. To give a sense of scale, and why this can be a limitation for planning, a single grid cell might encompass the entire area of a U.S. National Park for which a planner is trying to develop a hazard risk assessment. At that resolution the GCM cannot distinguish the park's mountain peaks from its valleys and rivers, and therefore cannot provide specific enough information for an assessment of ecological risk at local scales. Downscaling is the suite of techniques used to translate coarse-scale global climate model output into high-resolution data that reflects local topography, hydrology, and weather patterns.
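The scale mismatch is easy to quantify. The sketch below uses back-of-the-envelope arithmetic with an illustrative grid spacing and the approximate area of Yosemite National Park (the specific numbers are for illustration only):

```python
# Illustrative arithmetic: how much land does one coarse GCM grid cell cover?
gcm_cell_km = 100              # a typical coarse-resolution grid spacing (km)
cell_area = gcm_cell_km ** 2   # area of a single square grid cell (km^2)

park_area = 3_027              # Yosemite National Park, roughly 3,027 km^2

print(f"One {gcm_cell_km} km grid cell covers {cell_area:,} km^2,")
print(f"about {cell_area / park_area:.1f}x the area of Yosemite.")
```

A single model value thus has to stand in for an area several times larger than the entire park, including every peak, valley, and river within it.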

Two Paths: Statistical vs. Dynamical

Two fundamentally different approaches are commonly applied to this problem, each with its own philosophy and limitations:

Statistical Downscaling

"The Efficient Translator"

This method builds a mathematical relationship between historical large-scale weather patterns (from the GCM) and historical local observations from weather stations, modeled meteorology, and reanalysis. These methods typically assume that the relationship will remain constant in the future (an assumption often referred to as "stationarity"). Because it is computationally efficient, statistical downscaling allows scientists to downscale dozens of global models, creating large ensembles of high-resolution projections at spatial resolutions usually ranging from 4 to 12 km.
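A minimal sketch of the idea, using synthetic numbers in place of real GCM output and station records (real methods are far more sophisticated than a single linear regression):

```python
# Toy statistical downscaling: fit a historical relationship between a
# coarse GCM temperature and a local station temperature, then apply the
# SAME relationship to future GCM output (the "stationarity" assumption).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical" training period (deg C).
gcm_hist = rng.normal(15.0, 5.0, 1000)                           # large-scale predictor
station_hist = 0.8 * gcm_hist - 2.0 + rng.normal(0, 1.0, 1000)   # local predictand

# Learn the statistical relationship from the historical period.
slope, intercept = np.polyfit(gcm_hist, station_hist, 1)

# Apply it unchanged to a uniformly 3 C warmer future.
gcm_future = gcm_hist + 3.0
station_future = slope * gcm_future + intercept

print(f"fitted slope ~ {slope:.2f}")
print(f"projected local warming ~ {station_future.mean() - station_hist.mean():.2f} C")
```

Note that the projected local warming is simply the GCM warming scaled by the historical slope; if the true relationship shifts in a warmer climate, this method cannot see it.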

Dynamical Downscaling

"The Physical Simulator"

This method nests a high-resolution Regional Climate Model (RCM) inside the global model. It uses the GCM's output as boundary conditions and then physically solves the equations of fluid dynamics at a much higher spatial resolution and often with a more "realistic" representation of the land-surface and hydrology. This allows the final data product to better capture complex, non-linear feedbacks (like snow-albedo or convective storms) but with the drawback of being much more computationally expensive to run.
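The nesting idea can be caricatured in a few lines: a fine-grid model is driven at its edges by coarse-model values ("boundary conditions") and solves its own physics in the interior. Here the "physics" is just 1-D heat diffusion, purely for illustration:

```python
# Conceptual sketch of dynamical nesting: coarse-model values are imposed
# at the domain edges each step, while the fine grid integrates its own
# physics (here, simple diffusion) to fill in structure between them.
import numpy as np

coarse_left, coarse_right = 10.0, 20.0   # "GCM" values at the domain edges
n_fine = 21
temp = np.full(n_fine, 15.0)             # fine-grid initial state

for _ in range(5000):
    temp[0], temp[-1] = coarse_left, coarse_right           # boundary conditions
    temp[1:-1] += 0.25 * (temp[2:] - 2 * temp[1:-1] + temp[:-2])  # interior physics

# The interior relaxes to a smooth gradient the coarse model never resolved.
print(temp.round(2))
```

A real RCM like WRF solves the full equations of fluid dynamics in three dimensions with detailed land-surface schemes, which is exactly why it is so much more expensive than the statistical approach.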

Compare the Datasets

Different downscaling methods are optimized to capture different components of the climate system. Click the tabs below to explore the motivation and mechanics behind the three most common datasets used in adaptation planning.

Localized Constructed Analogs (LOCA)

Statistical
Motivation for the Method

The primary motivation behind LOCA was to correct a specific deficiency in earlier statistical methods (like BCSD) which tended to "smooth out" daily weather events, resulting in unrealistic "drizzle" and muted extremes. LOCA was designed to preserve the daily sequence and intensity of weather events by using actual historical days as "analogs" or building blocks for the future projection. This makes it the preferred statistical dataset for analyzing changes in extreme events.
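The analog idea can be sketched with synthetic arrays (this is a toy illustration of the concept, not the actual LOCA algorithm): for each "future" day, find the historical day whose large-scale pattern is closest, then use that day's fine-scale field.

```python
# Toy constructed-analog downscaling: each future coarse pattern is matched
# to its nearest historical coarse pattern, and the matching day's observed
# fine-scale field is used as the building block. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

n_hist, n_coarse, n_fine = 365, 4, 16
hist_coarse = rng.normal(size=(n_hist, n_coarse))   # coarse historical patterns
hist_fine = rng.normal(size=(n_hist, n_fine))       # matching fine-scale fields

def downscale_day(future_coarse):
    """Return the fine-scale field of the closest historical analog day."""
    dists = np.linalg.norm(hist_coarse - future_coarse, axis=1)
    return hist_fine[int(np.argmin(dists))]

fine_field = downscale_day(rng.normal(size=n_coarse))
print(fine_field.shape)
```

Because the result is drawn from an actual observed day, daily intensity and spatial texture come from the historical record rather than from a smoothed statistical fit, which is why analog methods avoid the "drizzle" problem.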

✓ Key Strengths

Preservation of daily extreme precipitation and heat events. Widely used in the U.S. National Climate Assessment (NCA5), making it a "standard" for federal consistency.

⚠ Limitations

Does not explicitly preserve the multivariate relationship between variables (e.g., humidity and temperature) as strictly as other approaches. Relies on stationarity of historical patterns which may not be realistic in a rapidly warming climate.

Multivariate Adaptive Constructed Analogs (MACA)

Statistical
Motivation for the Method

The motivation for MACA was to support complex ecological and hydrological modeling where the interaction between variables should be considered. For example, fire danger depends on high temperature, low humidity, and wind occurring simultaneously. MACA constructs the future climate by matching patterns across multiple variables at once, ensuring that the physical consistency between temperature, dewpoint, and wind fields is preserved from the GCM.
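The multivariate matching idea, again as a toy sketch with synthetic data (MACA's actual procedure is considerably more involved): distances are computed across several variables at once, so the chosen analog day keeps temperature, humidity, and wind mutually consistent.

```python
# Toy multivariate analog matching: the analog day minimizes the JOINT
# distance across all variables, rather than matching each variable
# independently (which could pick three different, inconsistent days).
import numpy as np

rng = np.random.default_rng(2)

n_hist = 365
variables = ("temperature", "humidity", "wind")
hist = {v: rng.normal(size=n_hist) for v in variables}   # one value per day

def best_analog(target):
    """Index of the historical day minimizing joint distance across variables."""
    total = np.zeros(n_hist)
    for v in variables:
        total += (hist[v] - target[v]) ** 2
    return int(np.argmin(total))

target_day = {"temperature": 1.5, "humidity": -1.0, "wind": 0.5}
print(best_analog(target_day))
```

Matching each variable on its own could, say, pair a hot day's temperature with a wet day's humidity; joint matching preserves the co-occurrence structure that fire-danger and hydrology models depend on.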

✓ Key Strengths

Preserves the joint structure of weather variables, making it ideal for fire weather indices, ecological modeling and impact assessment, and hydrology.

⚠ Limitations

Can sometimes introduce spatial artifacts in complex terrain where the joint probability of variables is difficult to resolve.

UCLA WRF CMIP6

Dynamical
Motivation for the Method

The UCLA WRF simulations were motivated by the failure of statistical methods to capture non-stationary physical feedbacks in the complex topography of the Western U.S. In a warming world, the way snow reflects sunlight (albedo) changes as the snow melts, and the way wind moves over mountains changes as the atmosphere stabilizes. Statistical methods cannot predict these physical changes because they are not explicitly captured in the historical record. By running a full physics engine (WRF) at 3km to 9km resolution, this dataset explicitly simulates these interactions.

✓ Key Strengths

Captures novel physical feedbacks (e.g., snow-albedo). Resolves complex mountain-valley wind systems crucial for understanding future changes to hazards like wildfire and extreme precipitation in complex terrain.

⚠ Limitations

Extremely computationally expensive, resulting in a smaller ensemble size (fewer models) compared to statistical methods. Limited spatial coverage (typically Western U.S. only).

The "DNA" of Training Data

Statistical downscaling models must be "trained" on historical observational datasets to learn the local climate patterns. This means the downscaled product effectively inherits the "DNA" of the observational dataset used. If the training data has biases—such as undercounting snow in high mountains due to a lack of gauge stations—the future projection will carry that same flaw forward.
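This inheritance can be demonstrated with invented numbers: if the "observations" used for training systematically undercount precipitation by 20%, the trained model reproduces that undercount in its future projections.

```python
# Toy illustration of bias inheritance: a regression trained on biased
# observations carries the bias into the future. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(3)

true_local = rng.gamma(2.0, 5.0, 1000)             # "true" local precipitation (mm)
biased_obs = 0.8 * true_local                      # gridded product undercounts by 20%
gcm_hist = true_local + rng.normal(0, 2.0, 1000)   # coarse predictor

# Train on the biased observations, as a real workflow unknowingly would.
slope, intercept = np.polyfit(gcm_hist, biased_obs, 1)

gcm_future = gcm_hist * 1.1                        # a 10% wetter future
projection = slope * gcm_future + intercept

# The projection sits well below the comparable "true" future values,
# because the model faithfully learned the flawed training data.
print(projection.mean() / (true_local.mean() * 1.1))
```

No amount of additional GCM skill can remove this error; it enters through the training data and persists in every downstream projection.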

PRISM

Widely used in the U.S., PRISM uses a regression based on elevation to estimate climate between weather stations. While this performs reasonably well in complex, data-sparse terrain (like high alpine zones), it relies heavily on interpolation, which becomes more uncertain in places with a lower density of stations.
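The core of an elevation regression can be sketched in a few lines (the real PRISM system is far more elaborate, weighting stations by distance, aspect, coastal proximity, and more; the station values below are invented):

```python
# Toy elevation regression in the spirit of PRISM: fit temperature against
# station elevation, then predict at an ungauged grid cell's elevation.
import numpy as np

station_elev = np.array([300.0, 800.0, 1500.0, 2200.0])   # station elevations (m)
station_temp = np.array([18.0, 15.2, 10.9, 6.5])          # observed temps (deg C)

# Fit a lapse rate (deg C per m of elevation gain) across the stations.
lapse, t0 = np.polyfit(station_elev, station_temp, 1)

grid_elev = 1800.0                                         # ungauged grid cell
print(f"lapse rate ~ {lapse * 1000:.1f} C/km")
print(f"estimated temperature at {grid_elev:.0f} m: {lapse * grid_elev + t0:.1f} C")
```

Where stations are dense, such a fit is well constrained; where they are sparse, the same arithmetic produces an estimate with much wider uncertainty, which is exactly the concern raised above.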

Daymet

Another common gridded observation set. Research indicates that Daymet and PRISM can disagree significantly on historical temperature and precipitation trends in the complex terrain of the Western U.S. This disagreement creates an uncertainty that persists into the future projections.

Livneh

Often used for training older CMIP5-era downscaled data (like the original LOCA). Previous studies have identified a "drizzle bias" in this dataset in certain regions, where it records too many days with trace precipitation compared to station data. This problem is corrected, however, in downscaled products trained on the updated 2021 version of the dataset (now used in LOCA2).

Regional Considerations Checklist

Climate challenges are not uniform across the continent. Use the tabs below to explore specific biases and modeling challenges identified in recent literature for your region of interest.

Cross-cutting Challenges

Key Challenges & Considerations
  • Irreducible Uncertainty: Natural climate variability often masks forced trends in the near-term (0–20 years).
  • Non-stationarity: Historical statistical relationships (used in downscaling) may break down under future warming (e.g., snow-albedo feedbacks, vegetation shifts).
  • Compound & Extreme Events: Models often underestimate the severity of compounding drivers (e.g., concurrent drought and heatwaves).
Guidance for Users
  • Avoid reliance on mean changes alone: Focus on changes in extremes, thresholds, seasonality, and variability.
  • Prioritize process-based credibility: Give greater weight to projections supported by physical understanding.
  • Apply scenario-based stress testing: Complement probabilistic projections with high-impact scenarios.
  • Explicitly acknowledge uncertainty: Distinguish between reducible (model resolution) and inherent (variability) uncertainty.

Alaska

Key Challenges & Considerations
  • Data Sparsity: Evaluating model performance is limited by scarce observations.
  • Cryosphere Complexity: Sea ice, permafrost, and glacier processes are crudely represented in GCMs but strongly influence local climate.
  • Model Selection: Useful models for bracketing temperature changes may not bracket key snow or ecological changes.
  • Tipping Points: Abrupt thaw of permafrost and ecosystem regime shifts are poorly represented.
Guidance for Users
  • Understand that uncertainty ranges are large and model evaluation is inherently challenging here.
  • Be cautious when applying downscaled products for cryosphere-impacted processes.
  • Collaborate with Alaska climate services experts (e.g., Alaska CASC) to evaluate specific use cases.

Intermountain West

Key Challenges & Considerations
  • Snowpack Uncertainty: Major challenges in simulating Snow Water Equivalent (SWE) magnitude and melt timing. GCM temperature biases amplify this.
  • Large Disagreements: Projected snow loss varies significantly between products.
  • Orographic Effects: Warming may alter the stationarity of mountain precipitation enhancement.
Guidance for Users
  • Use ensembles that include multiple downscaling methods to capture snowpack uncertainty.
  • For water supply planning, consider high-resolution dynamical downscaled data where available in addition to statistical datasets.

Southwest US

Key Challenges & Considerations
  • North American Monsoon: High model divergence in projecting monsoon strength and timing. Statistical downscaling may not capture dynamical shifts.
  • Aridification vs. Drought: Distinguishing between long-term drying trends and distinct drought events is critical.
  • Hot Models: High-sensitivity CMIP6 models may project excessive warming, exacerbating evaporative demand errors.
Guidance for Users
  • Verify that selected models reasonably simulate the monsoon historically.
  • Use multiple downscaling methods to assess the range of summer precipitation uncertainty.
  • Carefully assess evaporative demand projections, noting that "hot models" may produce outlier results.

Great Plains

Key Challenges & Considerations
  • Convective Precipitation: "Drizzle bias" is common; models miss the intensity of Mesoscale Convective Systems (thunderstorms).
  • Low-Level Jet: Biases in the Great Plains Low-Level Jet affect nocturnal precipitation transport.
  • Irrigation Feedbacks: Most GCMs lack representation of irrigation, missing its cooling/wetting feedback on local climate.
Guidance for Users
  • Caution is required for flood analysis or hazards dependent on high-intensity short-duration rainfall.
  • LOCA2 is generally preferred over BCSD for preserving daily extremes, but biases persist.
  • Consider the potential influence of land-use feedbacks (agriculture) not captured in models.

Northeast

Key Challenges & Considerations
  • Extreme Precipitation: The region has seen large observed increases in heavy rain; models struggle to capture the full magnitude of this intensification.
  • Atlantic Modes: Natural variability (NAO, AMO) strongly influences decadal wet/dry phases, complicating trend detection.
Guidance for Users
  • Prioritize datasets that excel at capturing daily extremes (e.g., LOCA2).
  • Consider the role of internal variability when interpreting near-term projections.

Southeast

Key Challenges & Considerations
  • Warm/Dry Bias: A prevalent warm/dry bias exists in many GCMs. Statistical downscaling training data selection can either correct or exacerbate this.
  • Tropical Cyclones: Standard GCMs/downscaling do not resolve hurricane intensity or tracks accurately.
  • Land-Atmosphere Coupling: Soil moisture-precipitation feedbacks are strong and difficult to model.
Guidance for Users
  • Investigate parent GCM performance to avoid unrealistic drying trends.
  • For hurricane risks, do not rely on standard downscaled precip/wind; use specialized synthetic track modeling or high-res dynamical data.

West Coast

Key Challenges & Considerations
  • Atmospheric Rivers (ARs): ARs largely determine water-year outcomes, yet GCMs struggle with AR landfalling location and magnitude.
  • Topographic Complexity: GCMs flatten the sharp contrasts between coastal ranges and inland valleys.
  • Marine Layer: Coastal fog/stratus processes are rarely captured, affecting coastal temperature projections.
Guidance for Users
  • Evaluate historical simulation of ARs.
  • For wind/fire/snow questions, consider dynamical downscaling.
  • Be aware that coastal microclimates may be poorly represented in statistical products.

Hawai'i & Pacific Islands

Key Challenges & Considerations
  • Island Scale: Islands are often smaller than a single GCM grid cell ("sub-grid scale").
  • Trade Winds & ENSO: Local rainfall is dominated by trade wind interactions with topography, which coarse models miss.
Guidance for Users
  • Standard global statistical downscaling is often insufficient.
  • Prioritize dynamical downscaling (WRF) or statistical methods specifically developed for island topography.
  • Consult local experts (Pacific RISA / PI-CASC).