# Assessing the Macroeconomic Impacts of Disasters: an Updated Multi-Regional Impact Assessment (MRIA) model

**Repository:** https://github.com/chirindaopensource/MRIA_model_macro_impact_disasters
**Owner:** © 2025 Craig Chirinda (Open Source Projects)
This repository contains an independent, professional-grade Python implementation of the research methodology from the 2025 paper entitled "Assessing the Macroeconomic Impacts of Disasters: an Updated Multi-Regional Impact Assessment (MRIA) model" by:
- Surender Raj Vanniya Perumal
- Mark Thissen
- Marleen de Ruiter
- Elco E. Koks
The project provides a complete, end-to-end computational framework for evaluating the regional and macroeconomic consequences of disasters. It implements the updated MRIA model, a supply-constrained, multi-regional input-output model solved via linear programming. The goal is to provide a transparent, robust, and computationally efficient toolkit for researchers and policymakers to replicate, validate, and apply the MRIA framework to assess economic resilience and inform disaster risk mitigation strategies.
## Table of Contents

- Introduction
- Theoretical Background
- Features
- Methodology Implemented
- Core Components (Notebook Structure)
- Key Callable: run_full_experiment
- Prerequisites
- Installation
- Input Data Structure
- Usage
- Output Structure
- Project Structure
- Customization
- Contributing
- License
- Citation
- Acknowledgments
## Introduction

This project provides a Python implementation of the methodologies presented in the 2025 paper "Assessing the Macroeconomic Impacts of Disasters: an Updated Multi-Regional Impact Assessment (MRIA) model." The core of this repository is the Jupyter notebook `MRIA_model_macro_impact_disasters_draft.ipynb`, which contains a comprehensive suite of functions to replicate the paper's findings, from initial data validation to the final execution of a full suite of robustness checks.
Traditional input-output models often struggle to capture the supply-side shocks and logistical bottlenecks characteristic of disasters. This project implements the updated MRIA framework, which addresses these shortcomings by incorporating production capacity constraints, inter-regional substitution possibilities, and explicit logistical frictions.
This codebase enables users to:
- Rigorously validate and structure a complete set of multi-regional supply-use data.
- Transform tabular economic data into the precise numerical tensors required for optimization.
- Calibrate the model's key behavioral parameter (`alpha`) to ensure its baseline fidelity.
- Execute the core three-step optimization algorithm to simulate the post-disaster economic equilibrium.
- Conduct a full suite of analyses presented in the paper:
- Sensitivity Analysis: Explore the trade-offs between production and trade flexibility.
- Criticality Analysis: Identify systemically important economic sectors based on their irreplaceability.
- Incremental Disruption Analysis: Trace the non-linear failure pathways of the economy under escalating stress.
- Execute a full suite of robustness checks to validate the stability of the model's conclusions.
## Theoretical Background

The implemented methods are grounded in input-output economics and linear programming, providing a quantitative framework for simulating economic shocks.
1. Supply-Constrained, Multi-Regional Framework: The model is built on a multi-regional Supply-and-Use Table (SUT) framework. Unlike traditional demand-driven models, the MRIA model's primary shock is a reduction in production capacity rather than a change in final demand.
2. Three-Step Optimization Algorithm: The post-disaster equilibrium is found by solving a sequence of three linear programs, which reflects a clear hierarchy of economic priorities:
   - Step 1: Minimize Rationing. The primary goal is to satisfy as much final demand as possible, minimizing the direct welfare loss to consumers.
     $$ \min z_1 = \sum_{r,p} v_{r,p} $$
   - Step 2: Minimize Economic Effort. Given the unavoidable level of rationing found in Step 1, the model finds the most efficient (least-cost) way to organize production and trade to meet the remaining demand. The objective function includes a calibrated penalty ($\alpha$) for using new or expanded trade routes.
     $$ \min z_2 = \sum_{r,s} x_{r,s} + \alpha \sum_{r',r,p} t_{r',r,p} $$
   - Step 3: Quantify Production Equivalent of Rationing. An analytical step to calculate the hypothetical production ($x'$) that would have been needed to satisfy the rationed demand, allowing for a comprehensive impact assessment.
     $$ \min z_3 = \sum_{r,s} x'_{r,s} $$
3. Comprehensive Impact Assessment: The total economic impact combines the direct loss in production with the production equivalent of rationed demand, yielding a single measure of the disaster's macroeconomic cost.
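The sequential hierarchy of the first two optimization steps can be illustrated with a deliberately tiny sketch that uses `scipy.optimize.linprog` in place of Gurobi. The two-product numbers (capacities, demands, import limits, and `alpha`) are illustrative assumptions, not values from the paper:

```python
from scipy.optimize import linprog

# Toy single-region economy: 2 products, demand met by local production x,
# imports t (penalized by alpha), or left unmet as rationing v.
# Decision vector z = [x1, x2, t1, t2, v1, v2].
demand = [10.0, 10.0]
cap = [6.0, 10.0]      # sector 1 loses 40% of its capacity in the "disaster"
t_max = [3.0, 0.0]     # only product 1 can be imported
alpha = 0.5            # illustrative trade-penalty parameter

# Balance constraint: x_i + t_i + v_i = d_i for each product i.
A_eq = [[1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1]]
bounds = [(0, cap[0]), (0, cap[1]),
          (0, t_max[0]), (0, t_max[1]),
          (0, None), (0, None)]

# Step 1: minimize total rationing, z1 = v1 + v2.
step1 = linprog(c=[0, 0, 0, 0, 1, 1], A_eq=A_eq, b_eq=demand, bounds=bounds)

# Step 2: holding rationing at its Step-1 minimum, minimize production plus
# penalized trade, z2 = sum(x) + alpha * sum(t).
step2 = linprog(c=[1, 1, alpha, alpha, 0, 0],
                A_ub=[[0, 0, 0, 0, 1, 1]], b_ub=[step1.fun],
                A_eq=A_eq, b_eq=demand, bounds=bounds)

print(round(step1.fun, 4), round(step2.fun, 4))  # -> 1.0 17.5
```

One unit of product-1 demand is unavoidably rationed (capacity 6 plus import ceiling 3 against demand 10); Step 2 then fills the rest at least cost. The real model works over many regions, sectors, and trade routes, but follows the same lexicographic structure.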
## Features

The provided Jupyter notebook (`MRIA_model_macro_impact_disasters_draft.ipynb`) implements the full research pipeline, including:

- Data Validation Pipeline: A robust, modular system for validating the structure, content, and economic consistency of all input data.
- Rigorous Preprocessing: A deterministic pipeline for transforming tabular data into the precise `numpy` tensors required by the optimization engine.
- Core Optimization Engine: An accurate, professional-grade implementation of the three-step optimization algorithm using the `gurobipy` library, including a robust calibration routine.
- Automated Analysis Orchestrators: High-level functions that automate the execution of the Sensitivity, Criticality, and Incremental Disruption analyses with a single call.
- Comprehensive Robustness Suite: A full suite of advanced robustness checks to analyze the framework's sensitivity to parameters, data uncertainty (via Monte Carlo), and methodological choices.
- Full Research Lifecycle: The codebase covers the entire research process from data ingestion to final, validated results, providing a complete and transparent replication package.
## Methodology Implemented

The core analytical steps directly implement the methodology from the paper:

- Input Data Validation (Task 1): The pipeline ingests four `pandas` DataFrames and a configuration dictionary, and rigorously validates their integrity.
- Data Preprocessing (Task 2): It transforms the validated data into a structured set of `numpy` arrays and index mappings.
- Model Calibration (Task 3): It systematically calibrates the `alpha` trade cost parameter so that the model replicates the baseline economy.
- Core MRIA Algorithm (Task 4): It implements the central three-step optimization algorithm for a single disaster scenario.
- Sensitivity Analysis (Task 5): It executes the core algorithm across a grid of 18 parameter settings to analyze resilience trade-offs.
- Criticality Analysis (Task 6): It executes hundreds of simulations to stress-test each economic sector individually and calculate its systemic importance.
- Incremental Disruption Analysis (Task 7): It executes dozens of simulations to trace the economy's non-linear response to escalating shocks.
- Orchestration & Robustness (Tasks 8-9): Master functions orchestrate the main pipeline and the optional, full suite of robustness checks.
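The calibration step (Task 3) amounts to choosing `alpha` so that the undisrupted model reproduces the observed baseline economy. The notebook contains the authoritative routine; the sketch below only illustrates one generic search pattern (grid scan plus golden-section refinement), with a synthetic stand-in deviation function instead of a real model run:

```python
import numpy as np

def baseline_deviation(alpha: float) -> float:
    """Stand-in for the real routine: in the actual pipeline this would run
    the UNDISRUPTED model at the given alpha and return the deviation of
    modeled production from the observed baseline. Here it is a synthetic
    unimodal proxy, minimized at alpha = ln(5) ~ 1.6094, for illustration."""
    return (np.exp(-alpha) - 0.2) ** 2

def calibrate_alpha(lo: float = 0.0, hi: float = 5.0,
                    n_grid: int = 51, tol: float = 1e-6) -> float:
    """Coarse grid search followed by golden-section refinement -- one simple
    way to pick the trade-penalty alpha; the notebook's routine may differ."""
    grid = np.linspace(lo, hi, n_grid)
    best = grid[np.argmin([baseline_deviation(a) for a in grid])]
    lo, hi = max(lo, best - 0.2), min(hi, best + 0.2)  # bracket the grid winner
    phi = (np.sqrt(5) - 1) / 2
    while hi - lo > tol:
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if baseline_deviation(a) < baseline_deviation(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

alpha_star = calibrate_alpha()
print(round(alpha_star, 4))  # -> 1.6094 for this synthetic proxy
```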
## Core Components (Notebook Structure)

The `MRIA_model_macro_impact_disasters_draft.ipynb` notebook is structured as a logical pipeline with modular orchestrator functions for each of the 9 major tasks.
## Key Callable: run_full_experiment

The central function in this project is `run_full_experiment`. It orchestrates the entire analytical workflow, providing a single entry point for running the main analyses and the advanced robustness checks, all controlled by a single configuration dictionary.

```python
def run_full_experiment(
    initial_production: pd.DataFrame,
    initial_trade: pd.DataFrame,
    supply_coefficients: pd.DataFrame,
    use_coefficients: pd.DataFrame,
    config: Dict[str, Any]
) -> Dict[str, Any]:
    """
    Executes the complete MRIA research study, including primary and robustness analyses.
    """
    # ... (implementation is in the notebook)
```
## Prerequisites

- Python 3.8+
- A Gurobi license (Gurobi offers free academic licenses). The `gurobipy` package must be installed and the license configured.
- Core dependencies: `pandas`, `numpy`, `scipy`, `gurobipy`, `tqdm`.
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/chirindaopensource/MRIA_model_macro_impact_disasters.git
   cd MRIA_model_macro_impact_disasters
   ```

2. Create and activate a virtual environment (recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install Python dependencies:

   ```bash
   pip install pandas numpy scipy "gurobipy>=9.5" tqdm
   ```

4. Install and configure Gurobi: Follow the instructions on the Gurobi website to install the Gurobi Optimizer and activate your license.
## Input Data Structure

The pipeline requires four `pandas` DataFrames with specific structures, which are rigorously validated by the first task:

- `initial_production`: MultiIndex `('region', 'sector')`, column `'production_value'`.
- `initial_trade`: MultiIndex `('origin_region', 'dest_region', 'product')`, column `'trade_value'`.
- `supply_coefficients`: MultiIndex `('region', 'sector', 'product')`, column `'coefficient'`.
- `use_coefficients`: MultiIndex `('dest_region', 'dest_sector', 'product', 'origin_region')`, column `'coefficient'`.
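A minimal, schematic construction of the four inputs may help clarify the expected index layout. The region, sector, and product names and all numeric values below are invented for illustration:

```python
import pandas as pd

# Two regions, one sector, one product -- purely illustrative values.
initial_production = pd.DataFrame(
    {"production_value": [100.0, 80.0]},
    index=pd.MultiIndex.from_tuples(
        [("R1", "manufacturing"), ("R2", "manufacturing")],
        names=["region", "sector"],
    ),
)

initial_trade = pd.DataFrame(
    {"trade_value": [25.0]},
    index=pd.MultiIndex.from_tuples(
        [("R1", "R2", "goods")],
        names=["origin_region", "dest_region", "product"],
    ),
)

supply_coefficients = pd.DataFrame(
    {"coefficient": [1.0, 1.0]},
    index=pd.MultiIndex.from_tuples(
        [("R1", "manufacturing", "goods"), ("R2", "manufacturing", "goods")],
        names=["region", "sector", "product"],
    ),
)

use_coefficients = pd.DataFrame(
    {"coefficient": [0.3]},
    index=pd.MultiIndex.from_tuples(
        [("R2", "manufacturing", "goods", "R1")],
        names=["dest_region", "dest_sector", "product", "origin_region"],
    ),
)

print(list(initial_production.index.names))  # -> ['region', 'sector']
```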
The entire experiment is controlled by a single, comprehensive Python dictionary, `config`. A fully specified example is provided in the notebook.
## Usage

The `MRIA_model_macro_impact_disasters_draft.ipynb` notebook provides a complete, step-by-step guide. The core workflow is:

1. Prepare Inputs: Load your four input `DataFrame`s and define your `config` dictionary. A complete template is provided.

2. Execute Pipeline: Call the master orchestrator function:

   ```python
   # This single call runs all analyses enabled in the config dictionary.
   full_results = run_full_experiment(
       initial_production,
       initial_trade,
       supply_coefficients,
       use_coefficients,
       full_config
   )
   ```

3. Inspect Outputs: Programmatically access any result from the returned dictionary. For example, to view the criticality analysis results:

   ```python
   crit_results = full_results['main_analysis_results']['criticality_analysis_results']
   print(crit_results.head())
   ```
## Output Structure

The `run_full_experiment` function returns a single, comprehensive dictionary with two top-level keys:

- `main_analysis_results`: A dictionary containing the results of the primary analyses (Sensitivity, Criticality, Incremental Disruption) that were enabled in the config.
- `robustness_analysis_results`: A dictionary containing the results of the advanced robustness checks that were enabled in the config.
## Project Structure

```text
MRIA_model_macro_impact_disasters/
│
├── MRIA_model_macro_impact_disasters_draft.ipynb  # Main implementation notebook
├── requirements.txt                               # Python package dependencies
├── LICENSE                                        # MIT license file
└── README.md                                      # This documentation file
```
## Customization

The pipeline is highly customizable via the master `config` dictionary. Users can easily enable or disable any analysis and modify all relevant parameters, including:

- The `disruption_scenario` for the sensitivity analysis.
- The `parameter_grid` of resilience factors to test.
- The `disruption_magnitude` for the criticality analysis.
- The target sectors and `disruption_levels` for the incremental analysis.
- All parameters for the suite of robustness checks.
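The authoritative `config` schema is the fully specified template in the notebook; the fragment below is a hypothetical sketch that only illustrates the enable/disable-and-override pattern (the top-level `run_*` key names are placeholders, while `disruption_magnitude` and `disruption_levels` follow the parameter names listed above):

```python
# Hypothetical config fragment -- key names are placeholders, not the
# notebook's authoritative schema.
config = {
    "run_sensitivity_analysis": True,
    "run_criticality_analysis": True,
    "run_incremental_analysis": False,
    "run_robustness_checks": False,
    "criticality_analysis": {"disruption_magnitude": 0.5},
    "incremental_analysis": {
        "target_sectors": ["manufacturing"],
        "disruption_levels": [0.1, 0.2, 0.3],
    },
}

# Re-enable one analysis and extend its escalating disruption schedule.
config["run_incremental_analysis"] = True
config["incremental_analysis"]["disruption_levels"] += [0.4, 0.5]
```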
## Contributing

Contributions are welcome. Please fork the repository, create a feature branch, and submit a pull request with a clear description of your changes. Adherence to PEP 8, type hinting, and comprehensive docstrings is required.
## License

This project is licensed under the MIT License. See the `LICENSE` file for details.
## Citation

If you use this code or the methodology in your research, please cite the original paper:

```bibtex
@article{perumal2025assessing,
  title={Assessing the Macroeconomic Impacts of Disasters: an Updated Multi-Regional Impact Assessment (MRIA) model},
  author={Perumal, Surender Raj Vanniya and Thissen, Mark and de Ruiter, Marleen and Koks, Elco E.},
  journal={arXiv preprint arXiv:2508.00510},
  year={2025}
}
```

For the implementation itself, you may cite this repository:

> Chirinda, C. (2025). A Python Implementation of "Assessing the Macroeconomic Impacts of Disasters: an Updated Multi-Regional Impact Assessment (MRIA) model". GitHub repository: https://github.com/chirindaopensource/MRIA_model_macro_impact_disasters
## Acknowledgments

- Credit to Surender Raj Vanniya Perumal, Mark Thissen, Marleen de Ruiter, and Elco E. Koks for their insightful and clearly articulated research.
- Thanks to the developers of the scientific Python ecosystem (`numpy`, `pandas`, `scipy`) and the Gurobi team for their powerful optimization tools.
This README was generated based on the structure and content of `MRIA_model_macro_impact_disasters_draft.ipynb` and follows best practices for research software documentation.