The data is organized by simulation round (e.g. ISIMIP3a) and by data product:
Submitted output data are only published after an embargo period and after the modelling group has given explicit permission (see also: Workflow for contributions to ISIMIP).
Note: The ISIMIP data used to be publicly available via an ESGF service, but that service was decommissioned in May 2022.
Using the advanced search interface, you can search the repository in several ways:
Please note that these restrictions are combined to narrow the search down to the datasets you are looking for. The second box on the right shows all currently active constraints, each of which can be deselected using its cross. A list of all files matching the current search can be downloaded and used with, e.g., wget. Furthermore, a click on reset clears the whole interface.
If you don't find the data you're looking for, please don't hesitate to get in touch.
In the ISIMIP repository, files for the same variable, but covering different time spans, are combined into datasets; e.g. "gswp3-w5e5_counterclim_tas_global_daily" contains the individual files for that variable.
A search as described above will return a number of such datasets.
For each dataset, the stored metadata (e.g. the file size or the specifiers from the protocol) and the individual files of the dataset can be viewed. A click on the square icon with an arrow opens a separate landing page for the dataset or file. These landing pages have a unique URL which identifies the dataset or file and which we will keep available even if we should de-publish the actual file in the future.
If the dataset is part of a DOI registered by us, the corresponding citation is also displayed. Each file of the dataset can be downloaded individually, or all files can be downloaded at once (this might take a long time). Clicking the Configure download link opens the masking service for this dataset (more information below). Finally, a file list can be downloaded. This plain-text file can be used for a bulk download with tools like wget (see below).
In order to minimize the amount of data you need to download, the repository includes a masking service which removes all values of a file outside a selected region (a country, a bounding box, or land areas only). This operation can also be performed on all files of a dataset at once. A link to this interface is included in the search results and on every dataset/file landing page.
Please note that the masking of NetCDF files takes a considerable amount of time. Depending on the size of the dataset, it can take tens of minutes to create the download. It is only possible to mask global files.
In order to download a large number of files, using the wget tool is advised. If it is not already installed on your machine, you should be able to install it via your distribution's package manager (on Linux) or using Homebrew (on macOS). For Windows 10, we recommend running the wget scripts by installing the Windows Subsystem for Linux.
Once installed, a downloaded file list can be used with wget as follows (the "-c" (continue) flag ensures that already downloaded files are not downloaded again, and the "-i" (input file) flag tells wget to read the downloaded list of files):
wget -ci FILE_LIST.txt
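If you prefer to script the bulk download instead, the same skip-already-downloaded behaviour can be sketched in Python using only the standard library. The file-list format assumed here is one URL per line, as provided by the repository; the function names are illustrative, not part of any ISIMIP tooling:

```python
import os
from urllib.request import urlretrieve

def pending_downloads(list_path, target_dir="."):
    # Return (url, local_path) pairs for files not yet present locally,
    # mimicking wget's -c/-i behaviour of skipping completed files.
    pending = []
    with open(list_path) as fh:
        for line in fh:
            url = line.strip()
            if not url:
                continue  # skip blank lines in the file list
            local = os.path.join(target_dir, os.path.basename(url))
            if not os.path.exists(local):
                pending.append((url, local))
    return pending

def download_all(list_path, target_dir="."):
    # Fetch each missing file one at a time.
    for url, local in pending_downloads(list_path, target_dir):
        urlretrieve(url, local)
```

Note that, unlike wget's "-c", this sketch only skips files that already exist; it does not resume partially downloaded files.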
If you already know what files you want, you can use the path of the file directly, e.g.:
wget -c https://files.isimip.org/ISIMIP3b/InputData/climate/atmosphere_composition/co2/historical/co2_historical_annual_1850_2014.txt
You can also download a complete directory using "--mirror --no-parent", e.g.:
wget -c --mirror --no-parent https://files.isimip.org/ISIMIP3b/InputData/climate/atmosphere_composition/
The API of the repository can be accessed at data.isimip.org/api/v1. The important endpoints are "datasets" and "files". Both accept similar options, therefore we give examples for "datasets" only. A GET request to https://data.isimip.org/api/v1/datasets returns a paginated list of datasets. The response is a JSON object with "count" (the total number of datasets), "next" (a link to the next page), "previous" (a link to the previous page), and "results" (the list of datasets on this page).
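The pagination described above can be followed from Python with the standard library alone. A minimal sketch, assuming the endpoint accepts query parameters and uses a trailing slash; the helper names are our own, not part of the API:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://data.isimip.org/api/v1"

def build_url(endpoint, **params):
    # Compose an API URL, e.g. build_url("datasets", page=2)
    # -> "https://data.isimip.org/api/v1/datasets/?page=2"
    query = urlencode(params)
    return f"{API_BASE}/{endpoint}/" + (f"?{query}" if query else "")

def iter_datasets(**params):
    # Walk the paginated "datasets" endpoint by following the
    # "next" link of each response until it is null.
    url = build_url("datasets", **params)
    while url:
        with urlopen(url) as response:
            page = json.loads(response.read())
        yield from page["results"]
        url = page["next"]
```

Each yielded item is one dataset object from "results"; iterating over everything issues one HTTP request per page, so consider narrowing the search with query parameters first.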
Since the API is also used by the search interface described above, searches can be done in a similar way. A few examples:
The API can be used for scripted search and download. For Python, we created a small client library with some example Jupyter notebooks.