Merged
61 changes: 55 additions & 6 deletions README.md
@@ -1,10 +1,59 @@
# AGILE
**AGILE Open Global Glacier Data Assimilation Framework**

This project is an adaptation and extension of [OGGM](https://github.com/OGGM/oggm). It uses OGGM’s dynamical glacier model together with the backward-mode automatic differentiation features of [PyTorch](https://pytorch.org/). This makes it possible to run cost-function–based data assimilation for glacier evolution.

A preprint describing AGILE v0.1 is available [here](https://doi.org/10.5194/egusphere-2025-3401).

---

## Reproducing the results of the [preprint](https://doi.org/10.5194/egusphere-2025-3401)

If you want to reproduce the experiments shown in the preprint, follow these steps:

1. **Download the example experiment script** (Aletsch retreat case):
   [run_example_experiment.sh](https://raw.githubusercontent.com/pat-schmitt/AGILE/refs/heads/add_review_request/agile1d/sandbox/paper_v01_code/minimal_run_example/run_example_experiment.sh)

2. **Make the script executable**:
   ```bash
   chmod +x run_example_experiment.sh
   ```

3. **Run the script**:
   ```bash
   ./run_example_experiment.sh
   ```

Before running the script, make sure you have **[git](https://git-scm.com/)** and **[docker](https://www.docker.com/)** installed.
The script will:
- clone the needed repositories,
- download all required data (~2 GB),
- and run the full experiment.
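For convenience, steps 1–3 can be combined into a single sequence (same URL as in step 1; `wget` is assumed to be available):

```shell
# Download, make executable, and run the example experiment.
# Requires git and docker, as noted above; downloads ~2 GB of data.
wget https://raw.githubusercontent.com/pat-schmitt/AGILE/refs/heads/add_review_request/agile1d/sandbox/paper_v01_code/minimal_run_example/run_example_experiment.sh
chmod +x run_example_experiment.sh
./run_example_experiment.sh
```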

4. **Running other experiments**
   If you want to run more experiments, you can edit the file
   `mini_experiment_file_fg_oggm.py`, which (after the first execution of the script) is located in:
   `agile_workdir/AGILE/agile1d/sandbox/paper_v01_code/minimal_run_example/`

   For running **all** experiments from the publication, you can use the example settings here:
   <https://github.com/OGGM/AGILE/tree/master/agile1d/sandbox/paper_v01_code/run_scripts>
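The experiment selection happens in the lists at the top of `mini_experiment_file_fg_oggm.py`. A minimal sketch of enabling a second glacier and glacier state, using options that are already present (commented out) in that file:

```python
# Selection lists as in mini_experiment_file_fg_oggm.py; 'Baltoro' and
# 'advancing' are commented-out options in the original file, enabled
# here for illustration.
use_experiment_glaciers = ['Aletsch', 'Baltoro']
use_experiment_glacier_states = ['retreating', 'advancing']

# Assuming each selected glacier is run in each selected state (as the
# list structure suggests), this selection yields 2 * 2 = 4 setups.
n_setups = len(use_experiment_glaciers) * len(use_experiment_glacier_states)
print(n_setups)  # 4
```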

5. **Create example plots**
   After running the experiment(s), you can create an example plot by running:
   [create_example_plot.sh](https://raw.githubusercontent.com/pat-schmitt/AGILE/refs/heads/add_review_request/agile1d/sandbox/paper_v01_code/minimal_run_example/create_example_plot.sh)

   All plotting scripts used for the figures in the publication are available here:
   <https://github.com/OGGM/AGILE/tree/master/agile1d/sandbox/paper_v01_code/plotting_scripts>

   Note: you need to run **all** experiments if you want to recreate **all** figures from the publication.
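The plotting script reads its paths from environment variables (documented in the header of `create_example_plot.sh`), so figures can be redirected without editing the script. An illustrative invocation (the directory names are example values):

```shell
# HOST_WORKDIR and FIGURES_DIR are variables documented in
# create_example_plot.sh; "my_figures" is an arbitrary example name
# (it must lie inside HOST_WORKDIR, as the script checks).
HOST_WORKDIR="$PWD/agile_workdir" \
FIGURES_DIR="$PWD/agile_workdir/my_figures" \
./create_example_plot.sh
```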

---
## Previous work

**agile2D** (previously *combine2d*) is based on a 2D Shallow-Ice-Approximation model. It uses glacier outlines, surface topography, surface mass-balance time series, and optional ice-thickness measurements for ice caps.
More information:
- Master thesis by @phigre: <https://diglib.uibk.ac.at/ulbtirolhs/content/titleinfo/3086935/full.pdf>
- Repository state: <https://github.com/OGGM/agile/tree/04aa57353f72f272a264be5a4c683ffa7dc5bf0f>

**agile1D** (previously *combine1d*) is based on a 1D flowline model. It uses flowline surface heights, widths, and surface mass-balance time series for valley glaciers.
More information:
- Master thesis by @pat-schmitt: <https://diglib.uibk.ac.at/ulbtirolhs/content/titleinfo/6139027/full.pdf>
- Repository state: <https://github.com/OGGM/agile/tree/bf2f7372787adf3e4f31ba5fd56e8968b9fb3347>

@@ -0,0 +1,153 @@
#!/usr/bin/env bash
#
# Run a Python plotting script inside the AGILE Docker image, using the
# experiment outputs from a previous run and storing figures in a chosen folder.
#
# Usage:
#   chmod +x create_example_plot.sh
#   ./create_example_plot.sh
#
# Optional environment variables:
#   HOST_WORKDIR - Directory with AGILE repo + experiment outputs
#                  Default: ./agile_workdir
#   AGILE_IMAGE  - Docker image to use
#                  Default: ghcr.io/oggm/agile:20230525
#   OUTPUT_DIR   - Directory containing experiment outputs
#                  Default: $HOST_WORKDIR/experiment_results
#   FIGURES_DIR  - Directory where figures should be written
#                  Default: $HOST_WORKDIR/figures
#
# The Python script can access OUTPUT_DIR and FIGURES_DIR via:
#   os.environ["OUTPUT_DIR"]
#   os.environ["FIGURES_DIR"]

set -euo pipefail

# --- Configuration -----------------------------------------------------------

HOST_WORKDIR="${HOST_WORKDIR:-$PWD/agile_workdir}"
AGILE_IMAGE="${AGILE_IMAGE:-ghcr.io/oggm/agile:20230525}"

# Where the plotting script lives inside HOST_WORKDIR
RUN_SCRIPTS_SUBDIR="AGILE/agile1d/sandbox/paper_v01_code/minimal_run_example"

# Plotting script name
PLOT_SCRIPT="plot_example_fig.py"

# Directories for input (experiment outputs) and output (figures)
OUTPUT_DIR="${OUTPUT_DIR:-${HOST_WORKDIR}/experiment_results}"
FIGURES_DIR="${FIGURES_DIR:-${HOST_WORKDIR}/figures}"

# --- Basic checks ------------------------------------------------------------

if ! command -v docker >/dev/null 2>&1; then
    echo "ERROR: 'docker' not found. Please install Docker."
    exit 1
fi

echo "HOST_WORKDIR = ${HOST_WORKDIR}"
echo "OUTPUT_DIR = ${OUTPUT_DIR}"
echo "FIGURES_DIR = ${FIGURES_DIR}"

# Ensure HOST_WORKDIR exists
if [ ! -d "${HOST_WORKDIR}" ]; then
    echo "ERROR: HOST_WORKDIR does not exist: ${HOST_WORKDIR}"
    echo "Run the experiment script first."
    exit 1
fi

cd "${HOST_WORKDIR}"

# Ensure AGILE repo exists
if [ ! -d "AGILE" ]; then
    echo "ERROR: AGILE repository missing at ${HOST_WORKDIR}/AGILE"
    exit 1
fi

# Ensure plotting script exists
if [ ! -f "${RUN_SCRIPTS_SUBDIR}/${PLOT_SCRIPT}" ]; then
    echo "ERROR: plotting script not found:"
    echo "  ${HOST_WORKDIR}/${RUN_SCRIPTS_SUBDIR}/${PLOT_SCRIPT}"
    exit 1
fi

# Ensure OUTPUT_DIR exists
if [ ! -d "${OUTPUT_DIR}" ]; then
    echo "ERROR: OUTPUT_DIR does not exist: ${OUTPUT_DIR}"
    exit 1
fi

# Create FIGURES_DIR if needed
mkdir -p "${FIGURES_DIR}"

# Ensure both dirs are inside HOST_WORKDIR (required for /work mount)
case "${OUTPUT_DIR}" in
    ${HOST_WORKDIR}/*) ;;
    *) echo "ERROR: OUTPUT_DIR must be inside HOST_WORKDIR."; exit 1;;
esac

case "${FIGURES_DIR}" in
    ${HOST_WORKDIR}/*) ;;
    *) echo "ERROR: FIGURES_DIR must be inside HOST_WORKDIR."; exit 1;;
esac

# Compute container-visible paths
OUTPUT_DIR_IN_CONTAINER="/work${OUTPUT_DIR#${HOST_WORKDIR}}"
FIGURES_DIR_IN_CONTAINER="/work${FIGURES_DIR#${HOST_WORKDIR}}"
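# Illustrative example of the prefix stripping above (assumed paths):
#   HOST_WORKDIR=/home/user/agile_workdir
#   OUTPUT_DIR=/home/user/agile_workdir/experiment_results
#   -> OUTPUT_DIR_IN_CONTAINER=/work/experiment_results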

echo "Container will use:"
echo " OUTPUT_DIR -> ${OUTPUT_DIR_IN_CONTAINER}"
echo " FIGURES_DIR -> ${FIGURES_DIR_IN_CONTAINER}"

# --- Run container & plotting script -----------------------------------------

docker run --rm -i \
    --user "$(id -u):$(id -g)" \
    -e OGGM_WORKDIR=/work \
    -e OUTPUT_DIR="${OUTPUT_DIR_IN_CONTAINER}" \
    -e FIGURES_DIR="${FIGURES_DIR_IN_CONTAINER}" \
    -e RUN_SCRIPTS_SUBDIR="${RUN_SCRIPTS_SUBDIR}" \
    -e PLOT_SCRIPT="${PLOT_SCRIPT}" \
    -v "${HOST_WORKDIR}":/work \
    "${AGILE_IMAGE}" \
    bash -s <<'EOF'
set -e

# Set a writable HOME under /work so libraries can store caches/configs
export HOME="${OGGM_WORKDIR}/fake_home_plot"
mkdir -p "${HOME}"

# Set up Matplotlib config/cache under HOME
export MPLCONFIGDIR="${HOME}/.config/matplotlib"
mkdir -p "${MPLCONFIGDIR}"

# Set up salem (used by OGGM) cache under HOME
mkdir -p "${HOME}/.salem_cache"

# Create & activate plotting venv
rm -rf "${OGGM_WORKDIR}/plot_env"
python3 -m venv --system-site-packages "${OGGM_WORKDIR}/plot_env"
if [ ! -x "${OGGM_WORKDIR}/plot_env/bin/python3" ] && [ -x "${OGGM_WORKDIR}/plot_env/bin/python" ]; then
    ln -s python "${OGGM_WORKDIR}/plot_env/bin/python3"
fi
source "${OGGM_WORKDIR}/plot_env/bin/activate"

# Install packages
pip install --upgrade pip setuptools
pip install seaborn

# Change into directory with the plotting script
cd "/work/${RUN_SCRIPTS_SUBDIR}"

echo "Running plot script: ${PLOT_SCRIPT}"
echo "Using OUTPUT_DIR = ${OUTPUT_DIR}"
echo "Writing figures to FIGURES_DIR = ${FIGURES_DIR}"

python3 "${PLOT_SCRIPT}"

EOF

echo
echo "Plotting finished."
echo "Figures written into: ${FIGURES_DIR}"

@@ -0,0 +1,101 @@
import copy
import numpy as np
from agile1d.core.inversion import get_default_inversion_settings

# here specify the glaciers which should be used for the experiments
use_experiment_glaciers = ['Aletsch',
                           # 'Baltoro',
                           # 'Artesonraju',
                           # 'Peyto'
                           ]

use_experiment_glacier_states = [  # 'equilibrium',
                                 'retreating',
                                 # 'advancing'
                                 ]

# define some glacier specific parameters
inversion_settings_individual = None

# general description of the current experiments
general_description = 'full_run_fg_oggm'

# define different experiments here, with file suffixes for the experiment
# description; all possible combinations will be generated. Each key is an
# inversion setting: 'inversion_setting' = {'exp1': value, 'exp2': value}

# create lambdas dict
# set to True if you want to use all lambda values as in the publication
if False:
    start_ex = -4  # == 10^start_ex
    end_ex = 3
    nr_lam = 29
    all_lambdas = np.logspace(start_ex, end_ex, num=nr_lam)
    lam_dict = {'lam0': 0}
    for i, lam in enumerate(all_lambdas):
        lam_dict[f'lam{i + 1}'] = lam
else:
    # this is the lambda value used in the Aletsch retreating cases
    lam_dict = {'lam9': 0.01}

experiment_options = {
    'cost_lambda': lam_dict,
    'observations': {
        # 'obs0': {'fl_surface_h:m': {}},
        # 'obs1': {'dmdtda:kg m-2 yr-1': {}},
        # 'obs2': {'volume:km3': {}},
        # 'obs3': {'fl_surface_h:m': {},
        #          'dmdtda:kg m-2 yr-1': {}},
        # 'obs4': {'fl_surface_h:m': {},
        #          'volume:km3': {}},
        # 'obs5': {'dmdtda:kg m-2 yr-1': {},
        #          'volume:km3': {}},
        'obs6': {'fl_surface_h:m': {},
                 'dmdtda:kg m-2 yr-1': {},
                 'volume:km3': {}},
    },
    'regularisation_terms': {
        'reg0': {'smoothed_bed': 1},
    },
}

default_inversion_settings_options = get_default_inversion_settings()

default_inversion_settings_options['obs_scaling_parameters'] = {
    'uncertainty': {'fl_surface_h:m': {'absolute': 10.},
                    'dmdtda:kg m-2 yr-1': {'absolute': 100.},
                    'volume:km3': {'relative': 0.1}}
}

default_inversion_settings_options['spinup_options'] = {
    'section': {'extra_grid_points': 10,
                'limits': (0.6, 1.4),
                'fg_years': 0,
                }}
default_inversion_settings_options['experiment_description'] = general_description
default_inversion_settings_options['minimize_options']['disp'] = False
default_inversion_settings_options['minimize_options']['maxiter'] = 100
default_inversion_settings_options['max_time_minimize'] = 60 * 60 * 4  # in s

def recursive_define_inversion_setting(inv_var_list, tmp_inversion_setting):
    tmp_description = copy.deepcopy(tmp_inversion_setting['experiment_description'])
    inv_var = inv_var_list.pop(0)
    for inv_var_opt in experiment_options[inv_var]:
        tmp_inversion_setting['experiment_description'] = tmp_description + \
            '_' + inv_var_opt
        tmp_inversion_setting[inv_var] = experiment_options[inv_var][inv_var_opt]

        if len(inv_var_list) == 0:
            inversion_settings_all.append(copy.deepcopy(tmp_inversion_setting))
        else:
            recursive_define_inversion_setting(copy.deepcopy(inv_var_list),
                                               tmp_inversion_setting)


# set all experiment combinations together
inversion_settings_all = []
all_experiment_options = list(experiment_options.keys())
recursive_define_inversion_setting(all_experiment_options,
                                   default_inversion_settings_options)
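
# Illustrative, self-contained sketch (hypothetical demo, not used by the
# pipeline) of what the recursion above generates: one setting per combination
# of experiment options, with each chosen option key appended to the
# experiment description.
if __name__ == '__main__':
    import itertools
    _demo_base = 'full_run_fg_oggm'
    _demo_option_keys = [['lam9'], ['obs6'], ['reg0']]
    _demo_descriptions = ['_'.join([_demo_base] + list(combo))
                          for combo in itertools.product(*_demo_option_keys)]
    print(_demo_descriptions)  # ['full_run_fg_oggm_lam9_obs6_reg0']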